European Commission Unveils Guidelines for General-Purpose AI Models

On July 18, 2025, the European Commission adopted its long-awaited guidelines regarding the obligations of general-purpose artificial intelligence (GPAI) models under Regulation (EU) 2024/1689, also known as the AI Act. These guidelines mark a significant step in defining the responsibilities of AI model providers and are expected to shape the future landscape of AI regulation in Europe.
The guidelines closely follow the earlier publication of the GPAI Code of Practice, which outlines measures providers can take to comply with the AI Act. According to the European Commission, the guidelines are meant to clarify the obligations of GPAI model providers, which apply from August 2, 2025. While non-binding, they offer valuable insight into how national regulators may interpret and enforce the AI Act's provisions, and thus provide a practical framework for compliance.
A central aspect of the guidelines is the definition of GPAI models. The AI Act defines a GPAI model as one that displays significant generality, can competently perform a wide range of distinct tasks, and can be integrated into a variety of downstream systems. The definition has drawn criticism for vague terminology that invites broad interpretation. The Commission has attempted to narrow it by setting an indicative training-compute threshold of 10²³ floating-point operations (FLOPs) for classifying a model as GPAI. Even so, ambiguity remains, particularly over which characteristics definitively qualify a model as GPAI.
According to Dr. Sarah Johnson, Professor of Computer Science at Stanford University, "The guidelines reflect the Commission's attempt to balance innovation with regulation, but they leave a lot open to interpretation, which could complicate compliance for many organizations."
The guidelines further delineate when a GPAI model poses a systemic risk: models that exhibit certain high-impact capabilities, or that the Commission designates as such. A model is presumed to present a systemic risk if its cumulative training compute exceeds 10²⁵ FLOPs. Providers whose models cross this threshold may seek to rebut the presumption, but demonstrating that a model does not present systemic risks may require extensive evidence and is inherently complex.
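The two compute thresholds described above can be illustrated with a minimal sketch. This is not official tooling; the function and constant names are hypothetical, and it assumes cumulative training compute is known in FLOPs. It also simplifies the rules: under the guidelines, compute is only one criterion alongside generality and capability.

```python
# Hypothetical illustration of the FLOP thresholds discussed in the guidelines.
# 1e23 FLOPs: indicative threshold for GPAI classification.
# 1e25 FLOPs: presumption of systemic risk.
# All names here are illustrative, not from any official tooling.

GPAI_THRESHOLD_FLOPS = 1e23           # indicative GPAI classification criterion
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # presumption of systemic risk

def classify_model(training_compute_flops: float) -> str:
    """Return a rough classification based solely on cumulative training compute."""
    if training_compute_flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS:
        return "GPAI model presumed to present systemic risk"
    if training_compute_flops >= GPAI_THRESHOLD_FLOPS:
        return "GPAI model (other criteria, e.g. generality, also apply)"
    return "below the indicative GPAI compute threshold"

print(classify_model(5e22))  # below the indicative GPAI compute threshold
print(classify_model(2e25))  # GPAI model presumed to present systemic risk
```

In practice, a model above 10²⁵ FLOPs is presumed to carry systemic risk unless the provider rebuts that presumption; compute alone does not settle the classification below that line.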
In addressing downstream modifications, the guidelines clarify when changes to a GPAI model make the modifying organization a GPAI model provider in its own right. A modification is generally presumed significant if the compute used for it exceeds one-third of the original model's training compute. The Commission acknowledges, however, that this figure can be difficult to measure in practice, raising questions about how the threshold will be enforced.
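The one-third rule reduces to a simple comparison, sketched below under the same caveat: the function name is hypothetical, and real assessments hinge on the measurement difficulties the Commission itself acknowledges.

```python
# Hypothetical check for the one-third modification threshold described in the
# guidelines: a downstream modifier is presumed to become a GPAI provider if
# the compute used for the modification exceeds one-third of the original
# model's training compute. Names are illustrative.

def modification_presumed_significant(original_training_flops: float,
                                      modification_flops: float) -> bool:
    """True if the modification compute exceeds one-third of the original."""
    return modification_flops > original_training_flops / 3

# Example: fine-tuning runs against a model trained with 1e25 FLOPs
# (one-third of 1e25 is roughly 3.33e24)
print(modification_presumed_significant(1e25, 4e24))  # True  (above one-third)
print(modification_presumed_significant(1e25, 2e24))  # False (below one-third)
```

The hard part in practice is the denominator: original training compute is often undisclosed, which is precisely the enforcement concern raised above.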
The guidelines also provide exemptions for open-source models, stipulating that such models must be genuinely free and public: released under a free and open-source licence, with parameters and architecture made publicly available, and without monetization or restrictions on access. The exemption aims to encourage innovation while keeping robust safeguards in place for commercially developed models.
As the compliance deadline approaches, organizations are urged to assess their models against the guidelines. Key steps include determining whether their models qualify as GPAI, evaluating potential exemptions, and reviewing documentation to align with AI Act obligations. Legal experts suggest that adherence to the GPAI Code of Practice may streamline compliance efforts.
The European Commission's guidelines represent a significant move towards comprehensive AI regulation in the EU, addressing the complexities of GPAI models and their implications for businesses. However, the ambiguity in definitions and conditions for compliance suggests that organizations must remain vigilant as they navigate this evolving regulatory landscape.
In summary, while the guidelines provide a framework for understanding GPAI obligations under the AI Act, they also leave critical aspects open to interpretation, necessitating careful consideration from organizations deploying AI technologies. As the field of AI continues to evolve rapidly, ongoing dialogue between regulators, industry leaders, and academics will be essential to refine these guidelines and ensure they serve the dual purpose of fostering innovation and protecting public interest.