The EU AI Act: The European Union's AI law

Details and background on the implementation requirements

The aim of the AI Act is to improve the functioning of the European internal market and promote the adoption of human-centered and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, and the fundamental rights enshrined in the Charter of Fundamental Rights – including democracy, the rule of law, and environmental protection – against potentially harmful effects of AI systems. The AI Act is therefore also a product safety regulation. It is intended to protect European consumers from violations of fundamental rights resulting from the inappropriate use of AI. Providers of AI systems classified as high-risk will be required to verify and formally confirm their compliance with numerous requirements in line with the principles of trustworthy AI, from AI governance to AI quality. Violations of these requirements can result in significant fines, and providers can be forced to withdraw their AI systems from the market. Despite its extensive principles, rules, procedures, and new supervisory structures, the law is not intended to slow down innovation in the EU, but rather to promote further development in the AI field, especially by start-ups and SMEs, through legal certainty and regulatory sandboxes.

 

What is considered AI

The definition of AI in the AI Act is based on the internationally recognized OECD definition of AI. An AI system, according to the AI Act, is "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."


An AI system under EU law is characterized above all by its capacity to infer: from the inputs it receives, it derives outputs such as predictions and decisions that can influence physical and virtual environments. This is made possible by techniques such as machine learning and logic- and knowledge-based approaches. AI systems vary in their degree of autonomy and can be used either on a standalone basis or integrated into products, where they may adapt through use after deployment. The definition of AI in the AI Act is quite broad, which means that a large number of systems can fall under the regulation. However, the recitals accompanying the regulatory text, as well as the EU Commission's guidelines on the definition of an AI system published in early February 2025, clarify that the definition does not cover systems limited to mathematical optimization, simple data processing, classical heuristics, or simple predictions (e.g., based on basic statistical rules). The guidelines also identify seven elements of an AI system, while clarifying that not all of these elements need to be present throughout the entire life cycle for a system to qualify as an AI system under the AI Act.
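For orientation, those seven elements can be read as a rough screening checklist. The sketch below is purely illustrative: the element wording is paraphrased from Article 3(1) and the Commission's guidelines, and the helper function is our own construction, not an official test.

```python
# Paraphrased checklist of the seven definitional elements identified in the
# Commission's February 2025 guidelines. Illustrative only; not an official test.

AI_SYSTEM_ELEMENTS = [
    "machine-based system",
    "designed to operate with varying levels of autonomy",
    "may exhibit adaptiveness after deployment",
    "operates for explicit or implicit objectives",
    "infers how to generate outputs from the input it receives",
    "produces outputs such as predictions, content, recommendations, or decisions",
    "outputs can influence physical or virtual environments",
]

def screen_candidate(present: set[str]) -> str:
    """First-pass screening; a legal case-by-case assessment is still required."""
    missing = [e for e in AI_SYSTEM_ELEMENTS if e not in present]
    # Per the guidelines, not every element (e.g. adaptiveness) must be present
    # throughout the entire life cycle, so a missing element is not conclusive.
    return "likely in scope" if not missing else f"unclear, review: {missing}"
```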


Who the EU AI Act affects

The AI Act stipulates that the regulation applies exclusively to use cases that fall within the scope of EU law. It does not in any way restrict the responsibilities of member states or government authorities with regard to national security. Furthermore, the regulation does not apply to AI systems that serve exclusively military or defense purposes, are used solely for research and innovation, are released under free and open-source licenses (subject to certain exceptions), or are used for purely private purposes.


The AI Regulation applies to all providers of AI systems offered on the European market. The term "provider" covers individuals or entities that develop an AI system and place it on the market. Importers, distributors, and deployers are also subject to the regulation.


General Purpose AI Models (GPAI)

As with "traditional" (focused, purpose-oriented) AI models, the AI Act also classifies base models—the engines behind Generative AI—based on their risk. They are  designated "General Purpose AI"  (GPAI) thanks to their flexibility and potential for widespread use. 


The AI Act provides for the following classification, which is based not on the application but on the performance and reach of the underlying base model:

  • Level 1: General-purpose AI models (GPAI): All models must meet transparency requirements. These include technical documentation, a sufficiently detailed summary of the content used to train the model (including copyright-protected material), and requirements for labeling AI-generated content.
  • Level 2: GPAI with systemic risk: For "very powerful" AI models that may pose systemic risks, additional obligations apply, for example regarding the monitoring and reporting of serious incidents, model evaluation, and adversarial testing.


The quantitative, non-subjective distinction between GPAI models and GPAI models with systemic risk is based on the cumulative computational power used to train the underlying base model. This is measured in floating-point operations (FLOPs), not operations per second. Above a threshold of 10^25 FLOPs, a GPAI model is presumed to pose systemic risk.
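Because the threshold refers to cumulative training compute, a provider can estimate where a model stands with simple arithmetic. The sketch below uses the common "6 x parameters x training tokens" approximation for dense transformer training compute; this heuristic and the example figures are our assumptions, not a method prescribed by the AI Act.

```python
# Back-of-the-envelope check against the 10^25 FLOP threshold (Art. 51).
# The 6 * N * D approximation for training compute is a widely used heuristic;
# it is our assumption here, not a calculation mandated by the AI Act.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # cumulative training compute, not FLOP/s

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    return 6 * parameters * training_tokens

# Hypothetical model: 70 billion parameters trained on 15 trillion tokens.
flops = estimated_training_flops(70e9, 15e12)  # ~6.3e24 FLOPs
print(f"{flops:.2e} FLOPs; systemic-risk presumption: "
      f"{flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS}")
```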


To translate these new requirements into practice, experts from industry, academia, civil society, and other relevant stakeholders are working with the Commission to develop codes of practice – and ultimately harmonized EU-wide standards.


AI systems (use case-oriented)

The AI Act takes a risk-based approach to classifying AI systems. Therefore, not all AI systems are treated equally. First, it distinguishes between "traditional" AI systems and general-purpose AI (GPAI). The latter is a relatively recent development following the emergence of generative AI systems and is treated as a separate topic (see above).


The risk of so-called single-purpose AI (AI with a specific purpose) is assessed not based on its technology but on its use case. Risk categories range from "unacceptable" to "high" to "limited or minimal." Systems with unacceptable risk are prohibited, while those with minimal risk are not subject to regulation under the AI Act.
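For illustration, the sketch below maps a handful of use cases mentioned in this article onto those categories. The mapping is deliberately simplified and the enum labels are our own; an actual classification always requires a case-by-case assessment.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "conformity assessment required"
    LIMITED = "transparency obligations"
    MINIMAL = "no AI Act obligations"

# Illustrative mapping of use cases named elsewhere in this article;
# real classification requires a case-by-case legal assessment.
EXAMPLE_USE_CASES = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "CV screening for hiring decisions": RiskTier.HIGH,   # employment, Annex III
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}
```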


Prohibited AI systems

AI systems that pose an unacceptable risk are completely banned, effective just six months after the AI Act's entry into force. The AI Act lists the following applications:


  • Biometric categorization systems based on sensitive characteristics such as political opinion, religious or philosophical beliefs, sexual orientation or ethnic origin
  • So-called real-time remote biometric identification systems used in publicly accessible spaces for law enforcement purposes. Narrowly defined exceptions remain permitted; these are classified as high-risk and subject to strict conditions:
    • Time and location restrictions
    • For the targeted search for victims (e.g. in cases of kidnapping or human trafficking)
    • To avert the concrete and immediate danger of a terrorist attack
    • To detect or identify an offender or suspect of a serious crime within the meaning of the Regulation
  • Untargeted capture of facial images from the Internet or surveillance cameras to create a facial recognition database
  • Emotion recognition in the workplace and educational institutions 
  • Social scoring systems that rate people based on their social behavior or personal characteristics
  • Systems that manipulate people's behavior and impair their free will
  • Applications that exploit the vulnerabilities of certain groups of people, particularly due to age, disability, or socioeconomic status

The EU Commission provided further guidance on this issue in early February 2025 with guidelines on prohibited practices. These guidelines provide numerous examples of the practices prohibited under the AI Act and distinguish them from high-risk use cases. At the same time, they make clear that the line between prohibited and high-risk use cases can be very fine. A careful case-by-case assessment is therefore essential.


High-risk AI systems – the focus of regulation

The regulation's focus is clearly on high-risk AI systems, which are subject to a wide range of compliance requirements. Providers of such systems are required, for example, to implement a quality management system and a risk management system and to meet data quality and integrity requirements. They must also conduct a conformity assessment and subsequently issue a declaration of conformity. High-risk AI systems are divided into two categories:


  • Systems for products subject to EU safety regulations – such as machinery, toys, aerospace and automotive equipment, medical devices, and lifts – must undergo third-party conformity assessment.
  • Providers of systems in the areas listed in Annex III conduct the conformity assessment themselves. These areas include critical infrastructure, education, employment, essential private and public services (including financial services), law enforcement, migration/asylum/border control, and democratic processes (elections). Special rules apply, however, to the use of remote biometric identification (RBI) systems, for example to combat certain crimes (see the simplified sketch below).
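A minimal sketch of that two-way routing, restating the two categories above in code; real-world routing has more nuance (e.g., the special RBI rules), and the function is purely illustrative.

```python
# Simplified routing of the conformity assessment, based on the two
# categories described above. Illustrative only.

def conformity_route(is_annex_i_product: bool, is_annex_iii_area: bool) -> str:
    if is_annex_i_product:
        # Products under EU safety regulations (machinery, toys, medical devices, ...)
        return "third-party assessment by a notified body"
    if is_annex_iii_area:
        # Annex III areas (critical infrastructure, education, employment, ...)
        return "provider's own (internal) conformity assessment"
    return "not high-risk under these two categories"
```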


Before high-risk AI systems are brought onto the market, for example in the public sector, banking, or insurance, a fundamental rights impact assessment must also be carried out.


Citizens have the right to lodge complaints with national authorities about AI systems and algorithmic decisions that affect their rights. 


The remaining risk classes

There are types of AI systems that pose a limited or minimal risk and are therefore subject to fewer or no obligations under the AI Regulation:  


Administrative or internal AI systems such as spam filters or predictive maintenance systems fall under the "minimal risk" category, for which the AI Regulation stipulates no explicit obligations. However, companies can voluntarily adhere to codes of conduct for these AI systems.


Nevertheless, AI systems of all risk classes that interact with people, such as chatbots or recommendation systems, are subject to certain transparency obligations. Users must be informed that they are interacting with an AI system, and AI-generated content such as deepfakes or the results of biometric categorization must be labeled. These obligations are associated with the "limited risk" class.
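The AI Act requires such markings to be machine-readable but does not itself prescribe a concrete format. A minimal sketch of the idea, with an invented metadata schema and function name, might look like this:

```python
import json
from datetime import datetime, timezone

# Hypothetical machine-readable provenance tag for AI-generated content.
# The concrete schema below is our invention, not a format defined by the
# regulation or by harmonized standards.

def tag_ai_generated(content: str, system_name: str) -> str:
    metadata = {
        "ai_generated": True,
        "generator": system_name,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps({"content": content, "provenance": metadata})

print(tag_ai_generated("Hello! How can I help you?", "example-chatbot"))
```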


Conformity

To issue a declaration of conformity, providers of high-risk AI systems must demonstrate compliance with the regulation before market launch and throughout the entire life cycle of the AI system:

  • Quality management system (QMS) – ensuring appropriate governance regarding data quality, technical documentation, record-keeping, risk management, human oversight, and model validation, taking into account the principles of trustworthy AI, in particular transparency, robustness, accuracy, and cybersecurity.
  • Risk management system – anticipating potential AI risks, whether general or specific to each use case, designing controls, creating contingency plans, and defining responsibilities for resolving issues should they arise.
  • Lifecycle management – the QMS requires providers to mitigate and manage risks not only before placing the AI system on the market, but throughout its entire lifecycle. This includes registering the high-risk AI system in the EU database and logging incidents throughout its lifecycle, as sketched below.
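As a simplified illustration of such lifecycle record-keeping, a provider's internal records might be structured along the following lines. All field names here are hypothetical; the actual EU database registration and incident-reporting formats are defined by the AI Act and its implementing acts.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Illustrative sketch of post-market lifecycle records a provider might keep.
# Field names are hypothetical, not taken from the EU database schema.

@dataclass
class LifecycleLogEntry:
    system_id: str          # internal reference to the registered system
    timestamp: datetime
    event_type: str         # e.g. "conformity_reassessment", "serious_incident"
    description: str
    corrective_action: str | None = None

@dataclass
class HighRiskSystemRecord:
    eu_database_registration_id: str  # hypothetical identifier
    declaration_of_conformity: str    # reference to the issued declaration
    log: list[LifecycleLogEntry] = field(default_factory=list)
```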

Details can be found in our dedicated article on AI governance.


Enforcement

The law generally requires providers to monitor themselves, either by conducting a conformity assessment on their own or by engaging authorized third parties, depending on the type of high-risk AI system. The AI Regulation provides for an administrative structure with several central authorities, each entrusted with different tasks related to the implementation and enforcement of the law.


At EU level

The EU AI Office, a new authority within the European Commission, coordinates the implementation of the law across all EU member states. The AI Office also directly supervises general-purpose AI models, in particular those with systemic risk.


An advisory forum attached to the AI Office, consisting of stakeholders from business and civil society, provides feedback and ensures that a broad spectrum of opinions is represented during the implementation process.


In addition, the Scientific Panel, a body of independent experts, will identify systemic risks of AI, provide guidance on model classification, and ensure that the rules and implementation of the law are in line with the latest scientific findings.


At the national level

EU member states must establish or designate competent national authorities responsible for enforcing the law, known as market surveillance authorities. They must also ensure that all AI systems comply with applicable standards and regulations. Their tasks include:

  • Verification of the proper and timely implementation of conformity assessments,
  • Appointment of “notified bodies” (external auditors) authorized to carry out external conformity assessments,
  • Coordination with other supervisory authorities at national level (e.g. for banking, insurance, healthcare, automotive, etc.) as well as with the AI Office at EU level.


In Germany, according to the current draft bill for the implementation of the AI Act, AI supervision is to be shared between BaFin (for the financial sector) and the Federal Network Agency (for all other sectors).


Sanctions for violations

AI systems are classified according to their risk. Similarly, the sanctions provided for in the EU AI Act are graded according to the severity of the violation:

  • Violations relating to prohibited AI systems (Article 5) can cost providers up to EUR 35 million or 7 percent of their global annual turnover in the previous year, whichever is higher.
  • For violations of various other provisions (e.g. Articles 10 or 13), fines of up to EUR 15 million or 3 percent of annual turnover can be imposed, whichever is higher.
  • Supplying incorrect, incomplete, or misleading information to authorities can be punished with a fine of up to EUR 7.5 million or 1 percent of annual turnover, whichever is higher.
  • In addition to these monetary penalties, national supervisory authorities can force providers to withdraw non-compliant AI systems from the market or prohibit the provision of services.

The AI Regulation provides for more moderate fines for SMEs and start-ups.
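The tiered ceilings described above can be expressed as a small calculation. A minimal sketch: on our reading of Article 99, for most companies the applicable maximum is whichever of the fixed amount or the turnover percentage is higher, while for SMEs and start-ups the lower of the two applies; the function below is purely illustrative.

```python
# Sketch of the fine ceilings described above. For most companies the cap is
# the HIGHER of the fixed amount and the turnover percentage; for SMEs and
# start-ups the LOWER of the two applies (our reading of Art. 99).

FINE_TIERS = {
    "prohibited_practices":  (35_000_000, 0.07),  # Art. 5 violations
    "other_obligations":     (15_000_000, 0.03),  # e.g. Art. 10 or 13
    "incorrect_information": (7_500_000,  0.01),
}

def max_fine(tier: str, annual_turnover_eur: float, is_sme: bool = False) -> float:
    fixed_cap, pct = FINE_TIERS[tier]
    turnover_cap = pct * annual_turnover_eur
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

# Example: a large provider with EUR 2 bn turnover violating a prohibition.
print(max_fine("prohibited_practices", 2_000_000_000))  # 140000000.0
```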


Although the provisions on sanctions do not apply until August 2025, the prohibitions in force since February 2, 2025, have immediate effect, meaning that affected parties can enforce them before national courts and obtain interim injunctions.


Timetable of the regulation

The AI Act entered into force on August 1, 2024, with staggered implementation deadlines. Almost all of the AI Act will be applicable by August 2, 2026. However, some provisions apply earlier, while high-risk AI systems under Annex I (products subject to EU safety regulations) follow a three-year transition period:

  • February 2, 2025: AI systems with unacceptable risk are prohibited.
  • August 2, 2025: The rules on general-purpose AI models and on sanctions apply.
  • August 2, 2026: The remaining provisions on high-risk AI systems (Annex III), transparency requirements, and AI regulatory sandboxes apply.
  • August 2, 2027: Rules apply to GPAI models that were already on the market before the GPAI obligations took effect.

Use AI systems – but safely

Even though not all technical details have been clarified yet, the AI Act provides a clear impression of the scope and objectives of the regulation. Companies will need to adapt many internal processes and strengthen their risk management systems. The European standardization bodies CEN and CENELEC will translate the principles of the AI Regulation into technical norms and standards to facilitate the testing and certification of AI systems, and the EU Commission will publish guidelines on the application of the AI Act. Companies can, however, build on existing processes and learn from measures taken under previous laws such as the GDPR. We recommend that companies drive implementation within their organization, raise awareness of the new law among their employees, conduct an inventory of their AI systems, ensure appropriate governance measures, and meticulously review AI systems classified as high-risk.


At Deloitte, we support our clients in this process: we help them master the complexity and scope of the AI Regulation and prepare for future requirements. Benefit from Deloitte's thought leadership in trustworthy AI, our comprehensive expertise in developing AI systems, and our many years of experience as an audit firm. Our services are based on the six lifecycle phases of AI systems, which are also described in the AI Regulation and correspond to common practice.


Thought leadership in AI

Deloitte has extensive expertise in implementing AI-based solutions, as well as in the meticulous development of dedicated audit tools and monitoring methods for evaluating AI models according to the principles of trustworthy AI. Our reputation as a competent consulting firm is based in particular on our demanding quality standards. Completing a questionnaire is far from sufficient to assess the compliance of your systems. Deloitte conducts in-depth quantitative analysis and thoroughly tests your AI models to identify logic errors, methodological inconsistencies, implementation issues, data risks, and other weak points. We believe that only such a thorough approach can meet our clients' needs. However, this does not mean reinventing the wheel for every analysis. In the interest of efficiency, Deloitte has invested in the development of dedicated tools to streamline the numerous steps of the validation process. A series of white papers (download below) explains why quality assurance and governance mechanisms are critical to strengthening our trust in the AI models and systems that are shaping our present and future.







