European lawmakers have voted to delay key elements of the EU AI Act, the bloc’s flagship artificial intelligence regulation, while also backing proposals to ban nudify apps.
The measures, approved by a large majority in the European Parliament, would push back compliance deadlines for developers of high-risk AI systems (those deemed to pose a “serious risk” to health, safety, or fundamental rights) until December 2027. Companies developing AI systems covered by sector-specific safety rules, such as those for toys or medical devices, would have even longer to comply, with a proposed deadline of August 2028. Rules requiring providers to watermark AI-generated content would also be delayed, until November 2026. All of these measures had originally been set to take effect this August.
Members also backed proposals to include a ban on nudify apps in the revised AI Act. There are no details yet on what this might look like, though it “wouldn’t apply to AI systems with effective safeguards preventing users from creating such images.” The decision follows widespread outrage in the EU over the flood of Grok’s sexualized deepfakes on X earlier this year.
The vote extends a period of uncertainty for businesses operating in Europe, which have already faced delays after the EU missed its own deadlines to publish key guidance and changed elements of the legislation. It’s also unclear whether the proposed changes can be implemented before the original August deadline, as parliament cannot unilaterally change European law. Parliament must now negotiate the final text with the Council of the European Union, a body made up of ministers from all 27 member states.

