EU negotiators have reached a provisional political agreement on changes that, if formally adopted, would simplify the AI Act and make compliance easier for businesses. The AI Act is the EU’s new law regulating the development and use of AI.
What has happened?
The EU institutions have agreed a political deal on a package of AI Act amendments known as the Digital Omnibus on AI, part of the EU’s broader “Omnibus VII” simplification package. The aim is to reduce complexity, avoid some overlapping regulation, and give companies more time to comply while detailed standards and guidance are finalised.
The main practical point is that, under the provisional agreement, the compliance date for many “high-risk” AI systems would move from 2 August 2026 to 2 December 2027. For high-risk AI systems used as safety components in regulated products, the proposed application date would move to 2 August 2028.
This is not yet final law. The deal still needs to be formally adopted by both the Council and the European Parliament before it takes effect. The EU institutions say they intend to adopt it before 2 August 2026, which is the current start date for many high-risk AI rules.
What does “high-risk” mean?
The AI Act does not treat all AI systems in the same way. It puts stricter rules on “high-risk” AI systems because they are used in areas where mistakes or misuse could affect people’s rights, safety or access to important opportunities.
Examples include certain AI systems used in recruitment, employee management, education, credit scoring, access to essential services, biometrics, law enforcement and border management. They can also include certain AI systems used as safety components in products covered by EU sectoral safety legislation, such as medical devices, machinery, toys and lifts. These systems can be subject to more detailed requirements, including risk management, data governance, technical documentation, human oversight, accuracy, cybersecurity and monitoring.
What are the other changes?
- AI-generated content: The deadline for certain machine-readable marking or watermarking obligations relating to AI-generated content would become 2 December 2026. That is a short delay from the original 2 August 2026 date, but earlier than the 2 February 2027 date proposed by the Commission.
- Child abuse and non-consensual intimate content: The package introduces prohibitions on AI systems used to generate child sexual abuse material and non-consensual intimate content. The prohibition covers placing such systems on the EU market for that purpose, placing them on the market without reasonable safeguards against such misuse, and using them for that purpose.
- Product safety overlap: The package seeks to reduce overlapping requirements for certain AI-enabled regulated products, including by clarifying the position for machinery products.
- Safety components: The package narrows what counts as a “safety component”, so AI features that only assist users or optimise performance would not automatically be treated as high-risk unless their failure or malfunction creates a health or safety risk.
- Sandboxes and smaller businesses: The deadline for national AI regulatory sandboxes would move to 2 August 2027, and certain SME exemptions would be extended to small mid-cap companies.
What does this mean for businesses?
This will be welcome news for businesses that had been preparing to meet high-risk AI obligations from August 2026. If formally adopted, the changes should give many companies more time to build inventories, classify AI systems, allocate internal responsibility, update procurement processes, and prepare the documentation needed to support compliance.
However, this does not remove the need to plan for the AI Act. Some AI Act obligations are already in force, and businesses should continue working to the existing AI Act deadlines until the amendments are formally adopted.