From Voluntary Guidelines to Enforceable Standards

How the EU’s AI Codes of Practice Shape Regulation and Markets


Matija Franklin and Philip Moreira Tomei

The European Union’s approach to AI governance is undergoing a major transition in 2025. The AI Act, approved by the EU Council on 21 May 2024, entered into force in August 2024, initiating a phased regulatory implementation. Among its most immediate effects are the obligations for General Purpose AI Systems (GPAIS), which become enforceable in August 2025, twelve months after the Act’s entry into force. These obligations apply to developers of models such as GPT-4o, requiring compliance with transparency, risk assessment, and risk mitigation measures. The AI Objectives Institute (AOI) has been participating in the creation of the Codes of Practice (CoP) for General Purpose AI Systems.

While the AI Act establishes a unified framework for AI regulation across the EU, several member states are developing national AI strategies. Among these, France has taken a particularly proactive stance, with two key initiatives. The first is the Center for AI Evaluation, which focuses on assessing the capabilities of AI models rather than designing new safety evaluations. The second is the AI Action Summit, a rebranding of the AI Safety Summits previously hosted by the UK and South Korea. The summit expands the focus beyond safety, addressing five thematic areas: AI for Good, AI Ecosystem, AI Security and Safety, AI Global Governance, and AI’s Impact on the Workforce.

In the United Kingdom, the Labour government has signalled a commitment to AI policy reforms. Peter Kyle, the newly appointed Secretary of State at the Department for Science, Innovation and Technology (DSIT), has announced plans to put the AI Security Institute (AISI) on a statutory footing and to mandate that AI labs release safety data. The government has also created a Regulatory Innovation Office to expedite decision-making on AI policy and compliance.

To oversee and facilitate the implementation of the EU AI Act, the AI Office was established in June 2024. It plays a central role in regulatory enforcement, safety evaluations, and the development of Codes of Practice.

The Role of the CoP and Their Place in AI Regulation

The CoP are a key component of the AI Act’s enforcement framework, providing interim compliance guidelines for GPAIS while formal standards are being developed. Introduced in Article 56 of the AI Act, they serve as non-binding recommendations that guide AI developers in meeting their regulatory obligations before the adoption of harmonised standards, which are expected to take at least three years to finalise.

As participants in the EU AI Act’s CoP process, we contributed to Working Group 2 (Risk Identification and Assessment for Systemic Risks) and Working Group 3 (Risk Mitigation Measures for Systemic Risks). These workstreams focused on defining systemic risks for GPAIS and establishing mitigation measures to address them.

The drafting process involved AI developers, regulators, civil society, and technical experts, reflecting a multi-stakeholder approach. One key challenge was balancing comprehensive risk assessment with practical implementation, ensuring that obligations were both technically feasible and effective. Discussions also considered the relationship between the CoP and future binding standards, with the Codes serving as a foundation for longer-term regulatory enforcement.

Although the CoP are voluntary, compliance with them grants a presumption of conformity with the AI Act’s requirements, making them de facto regulation. This means that AI developers who adhere to the measures outlined in the CoP will be considered compliant with their regulatory obligations unless proven otherwise. By mitigating regulatory uncertainty, the CoP enable industry actors to adopt early compliance measures while AI governance frameworks evolve in response to emerging risks and technological advancements.

The Role of Standards in Risk Mitigation for Both Regulation and Markets

AOI has examined how standardisation frameworks serve as both regulatory compliance tools and market governance mechanisms. While regulatory enforcement ensures baseline safety measures, market-based approaches such as insurance, auditing, procurement, and due diligence can embed risk management directly into economic systems. A demand-based approach that integrates market demand with regulatory mandates could ensure that firms recoup the costs of compliance; however, no such approach was adopted in the CoP process.

Well-defined standards also provide clarity for investors and enterprises, enabling the development of AI risk management frameworks that integrate into procurement, liability assessments, and contractual obligations.

AI risk is increasingly recognised as a material business risk, leading financial institutions and insurers to integrate AI governance into risk assessment models. Enterprises are requiring certified AI assurance frameworks before adopting AI solutions, while insurers are exploring AI-specific policies that factor in compliance with recognised governance standards. These market-driven mechanisms could reinforce regulatory objectives by aligning economic incentives with risk management, fostering both innovation and accountability.

Beyond compliance, standards shape industry-wide expectations. Procurement policies are evolving to favour AI systems that adhere to defined safety benchmarks, extending AI governance beyond regulatory mandates into broader market practice. This trend highlights how standardisation reduces uncertainty, allowing businesses to adopt AI with clearer guardrails and risk mitigation strategies.

Standardisation plays a key role in ensuring that AI risk management remains proportional rather than overly restrictive. Instead of rigid regulatory mandates, well-designed standards allow businesses to adopt risk mitigation practices that scale with their operations. By integrating market incentives with compliance frameworks, standardisation ensures that AI governance supports both safety and technological development.

As AI risk management evolves, standardisation will serve as a critical link between regulatory compliance and market incentives, shaping both enforcement strategies and economic structures. The integration of regulatory and market-driven approaches will be essential to ensuring AI safety while fostering innovation and enterprise adoption.

