What exactly is an “AI System” as defined by the EU AI Act?


The European Union has been at the forefront of regulating artificial intelligence (AI), culminating in the pioneering EU AI Act. This legislation is a significant step towards ensuring that AI technologies are developed and used in a way that is safe, transparent, and accountable. A critical aspect of this regulatory framework is how it defines an "AI System."

So, what exactly constitutes an AI System under the new EU AI Act?

Definition of an AI System under the EU AI Act

The EU AI Act defines an "AI system" broadly, covering technologies designed to operate with varying levels of autonomy. Under Article 3(1) of the final text, an AI system is a machine-based system that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

This broad definition covers everything from relatively simple statistical models to complex machine learning systems driving autonomous vehicles. The key criterion is the system's capacity to infer: to derive its outputs from the inputs it receives, rather than merely executing rules fully specified in advance by a human.

Categories of AI Systems under the EU AI Act

The EU AI Act classifies AI Systems into different categories based on their risk level, from minimal risk to unacceptable risk. This categorization dictates the regulatory requirements for each system, with a particular focus on high-risk AI Systems.

High-risk categories include AI applications used in areas such as:

  • Critical infrastructure

  • Employment

  • Access to essential private and public services

  • Law enforcement

  • Migration management

  • Administration of justice and democratic processes

These high-risk systems are subject to strict compliance and transparency requirements to mitigate potential risks to health, safety, and fundamental rights.
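To make the tiered structure concrete, here is a minimal Python sketch of how a compliance team might model risk tiers and the obligations attached to them. The use-case labels and obligation lists are illustrative simplifications for this example, not an exhaustive or authoritative reading of the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the EU AI Act's risk-based approach."""
    UNACCEPTABLE = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

# Hypothetical mapping of internal use-case labels to tiers, loosely
# based on the categories discussed above (illustrative only).
USE_CASE_TIERS = {
    "social_scoring_by_public_authorities": RiskTier.UNACCEPTABLE,
    "cv_screening_for_hiring": RiskTier.HIGH,
    "critical_infrastructure_safety": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

# Simplified obligations per tier (illustrative, not legal advice).
TIER_OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["may not be placed on the EU market"],
    RiskTier.HIGH: [
        "risk management system",
        "technical documentation",
        "human oversight",
        "conformity assessment",
    ],
    RiskTier.LIMITED: ["transparency disclosures to users"],
    RiskTier.MINIMAL: ["no mandatory obligations (voluntary codes)"],
}

def obligations_for(use_case: str) -> list[str]:
    """Return the illustrative obligations for a given use-case label,
    defaulting to the minimal-risk tier for unlisted use cases."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
    return TIER_OBLIGATIONS[tier]
```

The point of the sketch is that compliance scope is driven by the use case, not the underlying technology: the same model could land in different tiers depending on where it is deployed.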

Implications for Developers and Businesses

The broad definition of an AI System under the EU AI Act has significant implications for developers and businesses. Entities involved in the creation or deployment of AI Systems within the EU must be acutely aware of the classification of their systems and the associated regulatory obligations. Compliance may involve rigorous documentation, risk assessment, and transparency measures to ensure that the AI Systems meet the established standards for safety and ethical considerations.

Developers and businesses must navigate these requirements carefully, balancing innovation with compliance. While this may pose challenges, particularly for smaller entities, it also encourages the development of responsible AI technologies that earn public trust and stand the test of regulatory scrutiny.

Konfer’s “cheat sheets” for EU AI Act compliance

Understanding what constitutes an "AI System" and maintaining strict compliance under the new EU AI Act is crucial for anyone involved in the development, deployment, or use of AI technologies within the European Union.

In line with this need, Konfer has developed a series of “cheat sheets” — the Konfer Control Questions Catalog — by analyzing the EU AI Act and abstracting its mandates into actionable control questions. Using Konfer’s actionable control questions, you can obtain comprehensive insights into your AI risks, risk posture, and risk mitigation measures.

The Konfer EU AI Act Control Questions Catalog is now available for download individually or as a package at the Konfer Store.
