Written by: Haim Ravia, Dotan Hammer
August 2, 2026, marks a key application date for the EU AI Act, when the regulation’s core framework becomes broadly operational.
High-risk systems under the EU AI Act
According to Article 113 of the EU AI Act, this date triggers the application of most provisions not already in force, including the comprehensive requirements for the high-risk AI systems listed in Annex III. Annex III enumerates AI systems in the fields of biometrics; critical infrastructure; education and vocational training; employment, workers management, and access to self-employment; essential private and public services; law enforcement; migration, asylum, and border control management; and the administration of justice and democratic processes. By contrast, the high-risk classification rules for AI systems embedded in products covered by EU harmonization legislation will apply from August 2027.
Key obligations taking effect in August 2026 include the full set of requirements for the high-risk AI systems described above. These span risk management, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy, robustness, and cybersecurity; deployer obligations for high-risk systems; conformity assessment procedures; post-market monitoring and incident reporting requirements; and the complete market surveillance framework. The transparency obligations under Article 50, which require disclosure of AI interactions, labeling of synthetic content, and identification of deepfakes, also become enforceable in August 2026.
Transparency of AI-generated content
On December 17, 2025, the European Commission published the first draft of the Code of Practice on marking and labeling AI-generated content, a significant milestone in implementing Article 50 of the EU AI Act. This voluntary code, developed by independent experts, establishes technical standards for watermarking and detecting synthetic media ahead of the transparency obligations becoming legally binding on August 2, 2026.
The draft Code addresses two key areas: rules for providers of generative AI systems on marking and detecting AI-generated content, and obligations for deployers who use AI professionally to label deepfakes and AI-generated text on matters of public interest. Providers must ensure that AI-generated or manipulated content is marked in a machine-readable format that enables detection of its artificial origin. The draft Code aims to combat the proliferation of sophisticated deepfakes and AI-driven misinformation while providing operational clarity for compliance. The Commission is seeking stakeholder feedback, with the final Code expected by June 2026.
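Neither the Act nor the draft Code prescribes a specific marking technique; in practice, providers are expected to rely on established provenance standards such as C2PA manifests or watermarking schemes. Purely as an illustrative sketch of what a machine-readable marker can look like at a technical level, the hypothetical helper below embeds an `AIGenerated` keyword into a PNG file as a standard tEXt metadata chunk, and a companion function detects it. The keyword and this approach are assumptions for illustration, not a compliance mechanism:

```python
import struct
import zlib

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def mark_png_as_ai_generated(png: bytes, keyword: str = "AIGenerated",
                             text: str = "true") -> bytes:
    """Insert a PNG tEXt chunk carrying a machine-readable marker.

    Hypothetical illustration only: real provenance marking would follow
    an established standard (e.g., C2PA), not a bare tEXt chunk.
    """
    if not png.startswith(PNG_SIGNATURE):
        raise ValueError("not a PNG file")
    # The IHDR chunk always comes first: 4-byte length, 4-byte type,
    # 13 bytes of data, 4-byte CRC -- insert our chunk right after it.
    ihdr_end = len(PNG_SIGNATURE) + 4 + 4 + 13 + 4
    data = keyword.encode("latin-1") + b"\x00" + text.encode("latin-1")
    chunk = (struct.pack(">I", len(data)) + b"tEXt" + data
             + struct.pack(">I", zlib.crc32(b"tEXt" + data)))
    return png[:ihdr_end] + chunk + png[ihdr_end:]

def is_marked(png: bytes, keyword: str = "AIGenerated") -> bool:
    """Naive detection: walk the chunk list looking for the marker keyword."""
    pos = len(PNG_SIGNATURE)
    while pos + 8 <= len(png):
        length, ctype = struct.unpack(">I4s", png[pos:pos + 8])
        if ctype == b"tEXt":
            body = png[pos + 8:pos + 8 + length]
            if body.split(b"\x00", 1)[0] == keyword.encode("latin-1"):
                return True
        pos += 8 + length + 4  # skip chunk header, data, and CRC
    return False
```

The marker survives file copying because it lives inside the image container itself, which is the intuition behind the "machine-readable" requirement; robust schemes additionally need to withstand re-encoding and cropping, which simple metadata does not.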
Spain’s AI guidance
Meanwhile, Spain’s Agency for the Supervision of Artificial Intelligence (AESIA) has released an extensive suite of 16 guidance documents (in Spanish) to support organizations in complying with the EU AI Act. Developed through Spain’s AI regulatory sandbox pilot in collaboration with technical experts and potential national authorities, these guides provide practical, non-binding recommendations aligned with regulatory requirements.
The guidance is organized into three categories: introductory guides (providing an overview of the regulation and practical examples), technical guides covering specific compliance requirements, and a checklist manual with accompanying templates. The technical guides address all core high-risk AI system obligations, including conformity assessment procedures, quality management systems, risk management, human oversight, data governance, transparency, accuracy, robustness, cybersecurity, record-keeping, post-market surveillance, incident management, and technical documentation. AESIA notes these documents remain subject to ongoing evaluation and will be updated following approval of the Digital Omnibus amendments to the AI Act.
Click here to read the EU Commission’s First Draft Code of Practice on Transparency of AI-Generated Content.
Click here to read the Spanish AESIA’s guidance documents (in Spanish).