EU Unveils Final AI Code of Practice to Guide Compliance with AI Act
The European Commission on July 10 released the final version of the General-Purpose AI Code of Practice, a voluntary framework designed to help companies comply with the EU’s landmark Artificial Intelligence Act (AI Act), whose general-purpose AI obligations take effect on August 2, 2025. The guidelines apply to mainstream generative AI models such as ChatGPT, Gemini, Llama, and Grok, outlining requirements for transparency, copyright compliance, and systemic risk management.

Developed by a panel of 13 independent experts and incorporating input from over 1,000 stakeholders—including industry leaders, researchers, and civil society groups—the code rests on three pillars: safety and security, transparency, and copyright. It mandates detailed disclosures about model architecture, training data sources, and energy consumption, and requires developers to implement measures respecting EU copyright law, such as honoring opt-out requests for web scraping and mitigating the risk of generating infringing content. For advanced models deemed to pose "systemic risk," providers must establish continuous risk assessment frameworks and report incidents to the newly formed EU AI Office.
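To make the opt-out requirement concrete: one common machine-readable signal publishers use to decline scraping is robots.txt. The sketch below is purely illustrative (the code of practice does not prescribe a specific mechanism), using Python's standard-library parser and a hypothetical crawler name, `ExampleAIBot`.

```python
from urllib.robotparser import RobotFileParser

# Illustrative sketch: a crawler checks a publisher's robots.txt policy
# before collecting a page for training data. "ExampleAIBot" is a
# hypothetical user-agent name, not a real crawler.

def may_scrape(robots_txt: str, user_agent: str, url_path: str) -> bool:
    """Return True only if the policy allows this crawler to fetch url_path."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, url_path)

# Example publisher policy: opt out of the AI crawler site-wide,
# allow all other agents.
policy = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

print(may_scrape(policy, "ExampleAIBot", "/articles/1"))  # False -> skip page
print(may_scrape(policy, "NewsReader", "/articles/1"))    # True
```

A production crawler would also need to handle per-path rules, fetch failures, and other opt-out signals (e.g. HTTP headers or metadata), but the core check is this gate before any download.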
Compliance timelines vary by model deployment date. Existing AI systems placed on the market before August 2, 2025, have until August 2, 2027, to align with the rules, while models released afterward must comply by August 2, 2026. Because the code is voluntary, signatories can reduce their administrative burden while gaining legal certainty; firms that decline to sign remain subject to the AI Act but without that added regulatory clarity.
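The two-track timeline above can be expressed as a simple date rule. This is an illustration of the article's stated deadlines only, not legal advice:

```python
from datetime import date

# The AI Act's general-purpose AI obligations apply from this date.
ACT_EFFECTIVE = date(2025, 8, 2)

def compliance_deadline(placed_on_market: date) -> date:
    """Map a model's market date to its deadline, per the article's timeline."""
    if placed_on_market < ACT_EFFECTIVE:
        # Pre-existing models get the longer two-year transition.
        return date(2027, 8, 2)
    # Models released on or after the effective date face the earlier deadline.
    return date(2026, 8, 2)

print(compliance_deadline(date(2024, 5, 1)))   # 2027-08-02
print(compliance_deadline(date(2025, 12, 1)))  # 2026-08-02
```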
Despite requiring formal approval from EU member states and the Commission (expected by late 2025), the code reflects the bloc’s commitment to balancing innovation with accountability. The EU’s stance contrasts with recent industry pushback: in July, over 45 European tech giants, including Airbus and Siemens, sought a two-year delay on the AI Act, which the Commission swiftly rejected. Microsoft, however, has signaled alignment, with President Brad Smith emphasizing the company’s reliance on European trust amid regulatory shifts.
The code marks a critical step toward operationalizing the AI Act, which categorizes AI systems by risk level and imposes strict oversight on high-impact applications. While the transparency and copyright rules apply to all general-purpose AI providers, the safety protocols target advanced models such as OpenAI’s ChatGPT and Google’s Gemini, reflecting their potential societal impact. As the global AI landscape evolves, the EU’s approach—combining stakeholder collaboration with regulatory rigor—could set a benchmark for responsible AI governance worldwide.