The European Commission has published a proposal to amend Regulation (EU) 2024/1689 (the AI Act). The proposal, known as the “Digital Omnibus on AI”, introduces targeted measures that aim to simplify compliance, address implementation challenges, and reduce administrative burdens.
The proposal follows stakeholder consultations that identified risks around the timely availability of harmonized standards and the designation of competent authorities.
Here’s a look at the key AI Act amendments proposed in the Digital Omnibus.
Linking compliance deadlines to standards availability
The AI Act envisioned harmonized technical standards that companies could follow to comply with the law, but those standards are running well behind schedule.
The Commission proposes a way to pause the clock: the strict rules for high-risk AI will not apply until the official guidance on how to comply is actually available.
The new timeline mechanism operates as follows (a rough sketch of the date logic follows the list):
- The Commission must adopt a decision confirming that adequate measures (harmonized standards, common specifications, or guidelines) are available.
- Obligations for high-risk AI systems classified under Article 6(2) and Annex III apply 6 months after this decision.
- Obligations for high-risk AI systems classified under Article 6(1) and Annex I apply 12 months after this decision.
- Default backstop dates remain: 2 December 2027 for Annex III systems and 2 August 2028 for Annex I systems.
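For illustration only, here is a minimal Python sketch of how the proposed date logic could work, assuming the backstop dates act as the latest possible application dates, so that a Commission readiness decision can only bring the obligations forward, never push them past the backstop. The function names and the decision date in the example are hypothetical, not part of the proposal.

```python
from datetime import date

# Assumed reading of the proposal: obligations apply at the earlier of
# (Commission readiness decision + grace period) and the backstop date.
BACKSTOPS = {
    "annex_iii": date(2027, 12, 2),  # Article 6(2) / Annex III systems
    "annex_i": date(2028, 8, 2),     # Article 6(1) / Annex I systems
}
GRACE_MONTHS = {"annex_iii": 6, "annex_i": 12}


def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months, clamping the day if needed."""
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    last_day = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31][month - 1]
    return date(year, month, min(d.day, last_day))


def application_date(category: str, decision_date: date | None) -> date:
    """Date from which high-risk obligations would apply for a category."""
    backstop = BACKSTOPS[category]
    if decision_date is None:
        return backstop  # no readiness decision: the backstop date applies
    return min(add_months(decision_date, GRACE_MONTHS[category]), backstop)


# Hypothetical example: a readiness decision adopted on 1 March 2027.
print(application_date("annex_iii", date(2027, 3, 1)))  # 2027-09-01
print(application_date("annex_iii", None))              # 2027-12-02 (backstop)
```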
Expanded scope for bias detection
The proposal replaces Article 10(5) with a new Article 4a to broaden the legal basis for processing special category data (such as ethnicity or health) for the purposes of bias detection and correction.
Currently, this exemption applies only to providers of high-risk AI systems. The new rule extends this permission to:
- Providers of non-high-risk systems and general-purpose AI models.
- Deployers (organizations using the AI).
This amendment acknowledges that bias can occur in any system, not just those classified as high-risk.
Organizations relying on this basis must still adhere to strict safeguards, including:
- Using pseudonymization and state-of-the-art security.
- Deleting the data immediately once the bias is corrected.
- Ensuring the data is never transferred to third parties.
Extension of privileges to small mid-caps
The proposal extends regulatory simplifications currently reserved for SMEs to “small mid-cap enterprises” (SMCs).
SMCs are defined by reference to the Annex to Commission Recommendation (EU) 2025/1099. Under the proposal, they may provide technical documentation in simplified form (Article 11) and comply with the quality management system requirements in a simplified manner (Article 17).
Member States must consider the economic viability of SMCs when imposing penalties, and SMCs gain priority access to AI regulatory sandboxes.
Centralized oversight and the AI Office
The proposal reinforces the competence of the AI Office for specific categories of AI systems. The AI Office becomes exclusively competent for supervision and enforcement in respect of:
- AI systems based on a general-purpose AI (GPAI) model where the model and system are developed by the same provider (excluding systems related to products in Annex I).
- AI systems that constitute or are integrated into a designated very large online platform (VLOP) or very large online search engine (VLOSE).
The proposal empowers the Commission to:
- Conduct pre-launch compliance checks on these systems.
- Set out clear enforcement procedures and fine structures.
- Use standard market surveillance powers adapted from existing product safety laws.
Changes to AI literacy and registration obligations
The proposal modifies the AI literacy obligation in Article 4.
The direct obligation on providers and deployers is removed. Instead, the Commission and Member States must “encourage” providers and deployers to ensure a sufficient level of AI literacy.
Registration requirements in the EU database are also amended.
Providers who conclude that their Annex III system is not high-risk (under the Article 6(3) derogation) are no longer required to register the system. The provider must still document the assessment and provide it to competent authorities upon request.
Sandboxes and real-world testing
The proposal expands the scope and governance of AI regulatory sandboxes and real-world testing (RWT).
- The AI Office may establish an EU-level AI regulatory sandbox for systems under its supervision (operational from 2028).
- RWT outside sandboxes is extended to high-risk AI systems covered by Union harmonization legislation in Annex I (previously limited to Annex III).
- Voluntary RWT agreements may be concluded between Member States and the Commission for products listed in Annex I, Section B.
- Testing plans for RWT within a sandbox are integrated into a single document.
Post-market monitoring
Amendments to Article 72 would remove the requirement for the Commission to adopt an implementing act establishing a harmonized template for the post-market monitoring plan.
This would give providers greater flexibility to tailor monitoring plans to their organization. Instead of a prescriptive template, the Commission must adopt guidance on post-market monitoring plans.
Conformity assessment bodies
To expedite the designation of notified bodies, the proposal introduces a “single application” procedure.
Conformity assessment bodies designated under Union harmonization legislation (Annex I, Section A) may submit a single application. This allows for a single assessment procedure for designation under both the AI Act and the relevant sector-specific legislation.
Notified bodies already designated under sector-specific legislation must apply for designation under the AI Act within 18 months of entry into application.
Transitional periods for generative AI
A new paragraph in Article 111 addresses general-purpose AI systems generating synthetic content.
Article 50(2) requires providers to watermark synthetic content, ensuring it is detectable as AI-generated. This provision appeared to apply to AI systems already on the market without any transition date.
Under the proposed amendment, providers of such systems placed on the market before 2 August 2026 would have until 2 February 2027 to comply with this requirement.
Preparing for change (or not)
The Digital Omnibus package is at the earliest stage of legislative development and must be approved by the European Parliament and Council before taking effect.
These proposals would provide relief for some high-risk AI system providers – but they might not happen, or they could change significantly before becoming law.
As such, AI operators should keep an eye on the Commission’s proposals but continue to comply with the law as it exists today.