Regulatory Framework for EU AI Act Still Unresolved
Representatives of the European Parliament and EU member states met in Brussels for twelve hours of negotiations over proposed changes to soften the EU's landmark AI Act. The talks collapsed because negotiators could not agree on which industrial sectors, already governed by sectoral safety rules, should be excluded from the AI Act's high-risk obligations. As reported by Reuters, the meeting ran from morning into the night before both sides walked away without an agreement.
With no agreement reached, companies must still comply with the AI Act's provisions as of August 2026, even though critical elements of the act remain unclarified. The AI Act, passed in March 2024, was the first broad regulation of artificial intelligence. Under the act, companies deploying high-risk AI systems must perform conformity assessments of those systems, maintain records on their design and operation, and develop and implement risk management programs. However, it remains unclear which companies fall within the act's scope.
The central point of dispute is regulatory duplication. Member states argued that companies already covered by product safety regulations should not be subject to the AI Act's additional requirements. Members of Parliament countered that product safety regulations do not address AI-specific risks such as bias amplification or the opacity of AI decision-making.
A Dutch representative told Inshorts after the breakdown that "big tech may celebrate, but European companies committed to safety are now facing regulatory chaos." His statement reflects growing frustration among smaller businesses that invested early in compliance preparation.
Industry organizations asked for a two-year suspension of the requirements to preserve their competitiveness while the regulatory ambiguity is resolved. Consumer organizations countered that modifying the act would weaken protections against potentially hazardous uses of high-risk AI in hiring, credit evaluation, and law enforcement.
With negotiations scheduled to resume in May, companies must assume that the AI Act's original obligations apply as written. According to Legal Nodes, by August 2025 companies will be required to demonstrate the following: technical documentation showing that their AI systems meet minimum requirements; risk management programs that identify and manage risks posed by AI systems across their entire operational lifetime; and conformity assessments completed by approved third parties for high-risk applications.
Misclassification carries serious penalties. According to Legal Nodes, companies that classify their AI systems incorrectly may face mandatory recalls of products containing those systems, suspension of deployment, and significant litigation risk. Penalties will depend on the definitions in the final version of the regulatory guidelines.
European AI startups are hit hardest by the uncertainty. Many have already begun compliance work based on draft guidelines, yet those guidelines may still shift. Competitors deploying AI in less-regulated markets such as the United States or China face no comparable regulatory obstacles.

This is not the first time European Union technology regulation has been delayed by disputes over implementation details. The General Data Protection Regulation (GDPR) faced similar debates in 2016 over data retention limits and scope. The key difference today is that AI deployments move much faster than traditional software development cycles, so regulatory ambiguity directly affects deployment timelines.
Parliamentarians appear reluctant to agree to exemptions because of recent high-profile failures involving AI systems. Their negotiating positions reflect concern that legacy safety regulations cannot keep pace with rapidly evolving AI technologies.
Until scope is resolved, companies operating in regulated sectors must meet dual compliance obligations. The August 2026 compliance deadline is unchanged. Technical documentation and conformity assessments must follow the most restrictive interpretation of the applicable provisions. Smaller European companies bear higher compliance costs relative to multinational technology corporations. Negotiations resume in May, with pressure mounting to resolve the scope question before implementation challenges arrive in the summer.
Industry associations and consumer advocacy groups are applying pressure from opposite directions. Industry associations assert that regulatory ambiguity will push AI work outside Europe. Consumer advocates contend that diluting the act would undermine years of effort toward international governance standards for AI.
The EU presented its AI Act as a model for democratic AI governance, distinguishing itself from China's government-led approach and America's market-based structure. A weakened compromise would suggest that even Europe's ambition for regulatory leadership is yielding to competitive pressures, leaving unanswered the question of who ultimately governs the use of AI systems.
