Background
Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence establishes a harmonised legal framework for the development, placing on the market and use of AI products and services in the EU. The requirements of the Regulation apply primarily to providers of AI systems, whether established in the EU or in a third country, that place AI systems on the EU market or put them into service there, and to deployers of AI systems located in the EU.
As these requirements for artificial intelligence are laid down in a Regulation, they are directly applicable in the Member States. They do not need to be transposed into national legislation.
Objective
The objective of the AI Regulation is to improve the functioning of the internal market and to promote the uptake of human-centric and trustworthy artificial intelligence (AI). At the same time, it aims to ensure a high level of protection of health, safety and the fundamental rights enshrined in the Charter, including democracy, the rule of law and environmental protection, against the harmful effects of AI systems in the Union, and to support innovation.
Definitions
The Regulation defines terms such as "AI system", "risk", "placing on the market", "putting into service", "reasonably foreseeable misuse", "instructions for use", "safety component", "conformity assessment", "CE marking", "harmonised standard", "validation data", "biometric data", "personal data", "non-personal data", "emotion recognition system", "profiling", "serious incident", "testing in real-world conditions", "informed consent", "provider", "downstream provider", etc.
Scope
The Regulation applies to various actors who place AI systems on the market, put them into service or use them in the EU, regardless of whether they are established or located in the EU or in a third country. Such actors are providers, deployers, importers, distributors, product manufacturers and authorised representatives of providers.
The Regulation does not apply to specific AI systems, such as systems designed exclusively for military, defence or national security purposes, or systems specifically developed for the sole purpose of scientific research and development.
Contents
The AI Regulation addresses the risks associated with specific uses of AI, categorising them into four levels of risk. The compliance requirements vary accordingly.
- Minimal or no risk: this includes applications such as AI-powered video games or spam filters. The vast majority of AI systems that are currently in use in the EU fall into this category. The Regulation allows the free use of AI with minimal risk.
- Limited risk: providers of limited-risk AI systems such as chatbots must meet basic transparency obligations, such as informing users that they are interacting with an AI system and labelling deep fakes as artificially generated. Providers must also ensure that AI-generated content is identifiable as such.
- High risk: these systems must meet strict requirements and obligations before they can be placed on the EU market, including rigorous testing, conformity assessment, transparency and human oversight. High-risk systems include AI applications in critical areas such as biometric identification, law enforcement, healthcare, transport and critical infrastructure.
- Unacceptable risk: AI systems considered a clear threat to the safety, livelihoods and rights of people are banned in the EU. These include cognitive behavioural manipulation, predictive policing of individuals based solely on profiling, emotion recognition in the workplace and in educational institutions, and social scoring.
High-risk systems require a conformity assessment to demonstrate compliance with EU requirements before entering the market, including testing for safety, accuracy, cybersecurity and bias mitigation. Depending on the system, the conformity assessment may be a self-assessment or require third-party verification by a notified body. Providers of high-risk systems must implement measures to ensure data quality, address biases and maintain transparency in data sourcing and processing. High-risk AI systems must also be designed with mechanisms for human oversight to prevent misuse or unintended outcomes. Providers and deployers of high-risk AI systems are required to monitor these systems on an ongoing basis to detect and mitigate risks after deployment.
Enforcement will be carried out by national authorities in the EU Member States, coordinated by the European AI Board, while the European Commission's AI Office supervises general-purpose AI models at EU level. The European AI Board will also help ensure consistent application of the AI Regulation across Member States, provide guidance, facilitate cooperation and monitor regulatory developments.
The Regulation applies from 2 August 2026. However, the general provisions and the prohibitions on certain AI practices (Chapters I and II) already apply from 2 February 2025, and the obligations for providers of general-purpose AI models apply from 2 August 2025.
Annexes
- Annex I List of Union harmonisation legislation
- Annex II List of criminal offences referred to in Article 5(1), first subparagraph, point (h)(iii)
- Annex III High-risk AI systems referred to in Article 6(2)
- Annex IV Technical documentation referred to in Article 11(1)
- Annex V EU declaration of conformity
- Annex VI Conformity assessment procedure based on internal control
- Annex VII Conformity based on an assessment of the quality management system and an assessment of the technical documentation
- Annex VIII Information to be submitted upon the registration of high-risk AI systems in accordance with Article 49
- Annex IX Information to be submitted upon the registration of high-risk AI systems listed in Annex III in relation to testing in real world conditions in accordance with Article 60
- Annex X Union legislative acts on large-scale IT systems in the area of Freedom, Security and Justice
- Annex XI Technical documentation referred to in Article 53(1), point (a) — technical documentation for providers of general-purpose AI models
- Annex XII Transparency information referred to in Article 53(1), point (b) — technical documentation for providers of general-purpose AI models to downstream providers that integrate the model into their AI system
- Annex XIII Criteria for the designation of general-purpose AI models with systemic risk referred to in Article 51
Read the full text of Regulation (EU) 2024/1689 on artificial intelligence
Further information: European Commission, Shaping Europe's digital future — the AI Act