EU Issues Guidelines for High-Risk AI Models Ahead of AI Act Deadline

The European Commission on Friday released guidelines to help companies develop and operate artificial intelligence (AI) models that fall under the European Union's Artificial Intelligence Act (AI Act). The guidelines focus on AI models deemed to carry “systemic risks,” offering providers a clearer path to compliance as the August 2 enforcement date approaches.
The guidance is meant to address growing concerns among technology firms about the complexity and weight of the law's obligations. Companies that fail to meet these standards face fines ranging from €7.5 million or 1.5% of annual turnover to €35 million or 7% of global turnover, depending on the nature and extent of the violation.
The AI Act: What It Means and Who It Targets
The AI Act, which officially became law in 2024, will begin enforcement on August 2, 2025, specifically targeting high-impact AI systems and foundation models such as those created by OpenAI, Google, Meta Platforms, Anthropic, and Mistral. These models are considered to pose systemic risks because of their advanced computing capabilities and potential societal impact.
The Commission defines models with systemic risks as those whose operations could affect public health, safety, fundamental rights, or society at large. Because of their powerful capabilities, they face stricter obligations than other AI systems.
New Requirements for High-Risk AI Models
The Commission’s guidelines detail several key requirements for AI systems categorized under systemic risk:
- Model Evaluation: Companies must conduct thorough assessments of their AI models to understand potential societal impacts.
- Risk Mitigation: Businesses are required to take proactive steps to reduce potential harms caused by the deployment of these models.
- Adversarial Testing: AI models must be tested for vulnerabilities and robustness under challenging conditions.
- Incident Reporting: Any serious incidents related to the AI model’s operation must be reported to the Commission.
- Cybersecurity Measures: Strong protections must be in place to prevent unauthorized access, theft, or misuse of AI models.
These stipulations aim to ensure that powerful AI systems do not pose undue risks to the public or create unintended harms.
Foundation Models and Transparency Rules
In addition to models with systemic risk, the Act imposes requirements on general-purpose AI (GPAI) models, also known as foundation models: large-scale AI systems that can be adapted to a wide range of tasks.
These models must adhere to transparency rules, which include:
- Technical Documentation: Companies must maintain clear and comprehensive documentation explaining how the model functions.
- Copyright Compliance: Firms must adopt policies that respect intellectual property rights during model training.
- Content Disclosure: Developers need to provide summaries outlining the types of data used to train the AI systems.
These measures are designed to enhance public trust and ensure responsible AI development.
Commission's Support and Forward Outlook
“With today’s guidelines, the Commission supports the smooth and effective application of the AI Act,” said Henna Virkkunen, the EU’s tech chief, in a statement.
The publication of these guidelines is intended to ease the transition into the AI Act’s enforcement phase and help businesses align with its framework. By offering this clarity, the Commission aims to balance innovation with ethical and legal responsibility.