
EU AI Act & Governance, Risk, and Compliance

The EU AI Act is the first comprehensive binding regulation for AI systems. It classifies AI by risk tier, imposes strict obligations on high-risk deployments, and prohibits specific uses outright. This module covers what you must do, what you cannot do, and how to determine which rules apply to your system.

Layer 1: Surface

The EU AI Act (Regulation 2024/1689) entered into force on 1 August 2024 and applies in phases through 2026 and beyond. It governs AI systems placed on the EU market or affecting EU persons — regardless of where the developer is based.

Risk classification tiers:

| Tier | What qualifies | Status |
|---|---|---|
| Prohibited | Social scoring, real-time biometric surveillance in public, manipulative AI targeting vulnerable groups | Banned. No compliance path. |
| High-risk | Hiring, credit scoring, critical infrastructure control, education assessment, law enforcement, border control, administration of justice | Conformity assessment, audit logs, human oversight, bias monitoring required |
| Limited risk | Chatbots, deepfakes, emotion recognition systems | Transparency obligations (disclose AI, label synthetic content) |
| Minimal risk | Spam filters, AI in video games, content recommendation | No specific obligations |

The critical insight: classification is use-case specific. A document summarisation tool is minimal risk. The same tool deployed to summarise evidence in criminal proceedings is high risk.

Implementation timeline (key dates):

| Date | What takes effect |
|---|---|
| 2 Feb 2025 | Prohibited AI provisions apply |
| 2 Aug 2025 | GPAI model obligations apply (foundation model providers) |
| 2 Aug 2026 | High-risk AI obligations fully apply |

Layer 2: Guided

Determining your risk classification

Work through these questions in order:

Step 1 — Is this use case prohibited?

  • Social scoring by public authorities? → Prohibited
  • Real-time remote biometric identification in public spaces? → Prohibited (with narrow law enforcement exceptions)
  • Subliminal manipulation of behaviour? → Prohibited
  • Exploiting vulnerabilities of specific groups (age, disability)? → Prohibited

If any answer is yes, stop. There is no compliance path.

Step 2 — Is this use case high-risk?

Check Annex III of the regulation. High-risk categories include:

  • Biometric categorisation affecting legal status
  • Critical infrastructure management (water, gas, electricity, traffic)
  • Education and vocational training (admissions, assessment)
  • Employment, worker management, access to self-employment (including CV screening, performance monitoring)
  • Essential private/public services (credit scoring, insurance risk, emergency services)
  • Law enforcement (evidence evaluation, risk profiling)
  • Migration, asylum, border control
  • Administration of justice and democratic processes

Step 3 — Does a transparency obligation apply?

  • AI interacting with humans without disclosure? → Disclose it is AI
  • Synthetic media (deepfakes)? → Label as AI-generated
  • Emotion recognition or biometric categorisation? → Inform affected persons

Step 4 — Does the GPAI model obligation apply? If you are a foundation model provider (not a deployer of a model built by someone else), systemic-risk provisions apply from August 2025. This module focuses on deployers — see the EU AI Office guidance for providers.
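The four steps above reduce to an ordered decision check: first match wins, and a prohibited match ends the analysis. A minimal sketch in Python, using illustrative keyword sets rather than the Act's legal definitions — a real classification must follow the Article 5 and Annex III wording and be reviewed by legal counsel:

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

# Illustrative placeholders, not the Act's legal categories.
PROHIBITED_USES = {"social scoring", "real-time public biometric id",
                   "subliminal manipulation"}
HIGH_RISK_USES = {"hiring", "credit scoring", "education assessment",
                  "critical infrastructure", "law enforcement", "border control"}
TRANSPARENCY_USES = {"chatbot", "deepfake", "emotion recognition"}

def classify(use_case: str) -> RiskTier:
    """Walk Steps 1-3 in order; the first matching tier applies."""
    if use_case in PROHIBITED_USES:
        return RiskTier.PROHIBITED   # Step 1: stop, no compliance path
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH         # Step 2: Annex III category
    if use_case in TRANSPARENCY_USES:
        return RiskTier.LIMITED      # Step 3: transparency obligations
    return RiskTier.MINIMAL
```

Note that the check runs on the use case, not the tool — the same model string would classify differently under "hiring" than under an internal minimal-risk use.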

High-risk compliance obligations checklist

For high-risk systems, the Act requires:

β–‘ Risk management system (documented, ongoing)
β–‘ Training, validation, and test data governance
β–‘ Technical documentation before market placement
β–‘ Logging and audit trails (automatic event logging)
β–‘ Transparency to deployers (usage instructions, capabilities, limitations)
β–‘ Human oversight design (ability to override, stop, correct)
β–‘ Accuracy, robustness, and cybersecurity measures
β–‘ Conformity assessment (self-assessment for most; third-party for some)
β–‘ EU Declaration of Conformity
β–‘ CE marking and registration in EU AI database
β–‘ Post-market monitoring plan

Bias detection and hallucination monitoring

The Act requires high-risk systems to achieve “appropriate levels of accuracy, robustness, and cybersecurity.” For AI systems, this translates to:

Bias detection: Monitor output distributions across protected characteristics (age, gender, nationality). For hiring tools, track acceptance rates per demographic group. Statistical parity difference > 5% is a common threshold requiring investigation.
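The parity check described above can be sketched in a few lines; the group names and counts below are hypothetical, and a production system would typically use a fairness toolkit such as Fairlearn rather than this hand-rolled version:

```python
def statistical_parity_difference(outcomes):
    """outcomes maps group name -> (accepted, total); returns the largest
    gap in acceptance rate between any two demographic groups."""
    rates = [accepted / total for accepted, total in outcomes.values()]
    return max(rates) - min(rates)

# Hypothetical monthly acceptance counts for a hiring tool.
gap = statistical_parity_difference({
    "group_a": (45, 100),  # 45% acceptance rate
    "group_b": (38, 100),  # 38% acceptance rate
})
flag_for_investigation = gap > 0.05  # the 5% threshold mentioned above
```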

Hallucination monitoring: For systems making consequential factual claims (legal, medical, financial), implement automated faithfulness scoring on a sample of outputs. LLM-as-judge or RAGAS faithfulness metrics work for this.
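The sampling workflow might look like the following sketch, where `judge` is a placeholder for whatever scoring backend you choose (an LLM-as-judge call or a RAGAS-style faithfulness metric); the sample rate and threshold are illustrative defaults, not values from the Act:

```python
import random

def sample_faithfulness(records, judge, sample_rate=0.1, threshold=0.8):
    """Score a random sample of (output, source) pairs and return those
    below the faithfulness threshold for human review.

    `judge` is any callable returning a 0-1 faithfulness score.
    """
    n = max(1, int(len(records) * sample_rate))
    flagged = []
    for output, source in random.sample(records, n):
        score = judge(output, source)
        if score < threshold:
            flagged.append((output, source, score))
    return flagged
```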

Logging requirements: High-risk systems must log inputs, outputs, and decision rationale with enough detail to reconstruct any individual decision. Under the Act, automatically generated logs must be kept for a period appropriate to the system's purpose (at least six months), and providers must retain technical documentation for ten years after the system is placed on the market.
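One way to structure such a log entry, sketched in Python — the field names and the integrity hash are illustrative design choices, not a schema mandated by the Act:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(system_id, inputs, output, rationale):
    """Build one audit-log entry detailed enough to reconstruct the
    decision later, with a hash to make tampering evident."""
    record = {
        "system_id": system_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": inputs,        # everything needed to replay the decision
        "output": output,
        "rationale": rationale,  # why the system decided as it did
    }
    body = json.dumps(record, sort_keys=True)
    record["integrity_hash"] = hashlib.sha256(body.encode()).hexdigest()
    return json.dumps(record, sort_keys=True)
```

Append-only storage (or a write-once bucket) pairs naturally with this kind of hashed record for the multi-year retention periods involved.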


Layer 3: Deep Dive

Structural differences from GDPR

Organisations that handled GDPR compliance will encounter familiar concepts (risk assessment, documentation, human oversight) alongside fundamentally different ones.

GDPR focuses on data processing β€” who processes personal data, for what purpose, and with what legal basis. GDPR compliance is primarily a question of lawful processing.

EU AI Act focuses on the decision-making system β€” how it is designed, tested, monitored, and controlled. The Act regulates the AI system regardless of whether it processes personal data. An AI system that makes decisions using only publicly available data can still be high-risk under the Act.

Key differences:

| Dimension | GDPR | EU AI Act |
|---|---|---|
| Trigger | Personal data processing | Placing AI on EU market or affecting EU persons |
| Risk model | By data category and purpose | By use case and sector |
| Documentation | Privacy notices, DPIA | Technical documentation, conformity assessment |
| Human oversight | Right to explanation, human review of automated decisions | Design requirement — system must enable override |
| Enforcement | Data Protection Authorities | National market surveillance + EU AI Office |

The β€œgeneral purpose AI” (GPAI) model regime

Foundation models — large models trained on broad data that can be used across many tasks — face a separate obligation track:

  • All GPAI providers: transparency requirements, technical documentation, copyright policy, training data summary
  • GPAI with systemic risk (capable models above a compute threshold, currently 10²⁵ FLOPs): adversarial testing, incident reporting, cybersecurity measures, energy efficiency reporting

For most enterprise deployers, the GPAI regime applies to your AI provider (Anthropic, OpenAI, Google), not to you. Your obligations as a deployer are under the high-risk or limited-risk tier based on your use case.

Governance programme structure

A minimal EU AI Act governance programme for an enterprise deployer:

  1. AI inventory: Register every AI system in use, with use case, data inputs, decision type, and affected persons. This is the prerequisite for all classification work.

  2. Classification decisions: For each system, document the risk classification with rationale. Treat this as a legal document — it should be reviewed by legal counsel and updated when the use case changes.

  3. High-risk management system: For each high-risk system, maintain a living risk register, testing records, and audit log demonstrating ongoing compliance.

  4. Fundamental rights impact assessment: Required before deploying high-risk systems. Similar in structure to GDPR's DPIA but focused on fundamental rights (non-discrimination, privacy, due process) rather than data protection specifically.

  5. Incident reporting: High-risk AI incidents causing serious harm must be reported to national market surveillance authorities. Establish a reporting process before you need it.
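Step 1's inventory can start as simply as one typed record per system. A sketch, with illustrative field names chosen so that the classification work in step 2 has what it needs:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row in the AI inventory (programme step 1)."""
    name: str
    use_case: str
    data_inputs: list[str]
    decision_type: str
    affected_persons: str
    risk_tier: str = "unclassified"      # set during classification (step 2)
    classification_rationale: str = ""   # legal-reviewed justification

inventory = [
    AISystemRecord(
        name="cv-ranker",
        use_case="CV screening for hiring",
        data_inputs=["CV text", "job description"],
        decision_type="candidate ranking",
        affected_persons="job applicants",
    ),
]
```

A spreadsheet works equally well to begin with; the point is that every system gets a row before any classification decision is made.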

Further reading

  • EU AI Act full text; Official Journal of the European Union, 2024. The binding regulation text; Annex III (high-risk categories) and Annex IV (technical documentation) are the most practically relevant sections.
  • EU AI Office GPAI Code of Practice; European Commission, 2024. Ongoing guidance from the EU AI Office on GPAI compliance; updated as implementation progresses.
  • AI Act Explorer; Future of Life Institute, 2024. Interactive guide to navigating the Act; useful for initial classification work.

EU AI Act & Governance, Risk, and Compliance β€” Check your understanding

Q1

Your company uses GPT-4 to power a customer service chatbot for a retail e-commerce site. A lawyer asks whether this is a 'high-risk' AI system under the EU AI Act. What is the correct answer?

Q2

Your HR team wants to deploy an AI tool that analyses CV text to rank candidates. What risk tier does this fall under, and what is the primary compliance implication?

Q3

You deploy the same LLM-based document summarisation tool in two contexts: (A) summarising internal meeting notes, and (B) summarising court evidence for use by judges. How does the risk classification differ?

Q4

The EU AI Act's prohibited AI provisions took effect in February 2025. Which of the following would be prohibited under Article 5?

Q5

Your organisation is based in the United States and serves no EU customers. You use an AI hiring tool that would be classified as high-risk under the EU AI Act. Do the Act's requirements apply to you?