Europe Rallies Global Experts to Draft Groundbreaking AI ‘Code of Practice’
The European Union is assembling top international experts to develop the first-ever “Code of Practice” for general-purpose AI models, aiming to set new benchmarks for transparency and risk management under its groundbreaking AI Act.

This initiative, led by the European AI Office, seeks to shape the future of artificial intelligence by drafting a comprehensive framework that addresses critical issues such as transparency, copyright, risk assessment, and internal governance for AI systems. The Code of Practice will focus on AI models like large language models (LLMs) and systems integrated across multiple sectors.

Global Collaboration for AI Standards

Announced on September 30, the effort brings together nearly 1,000 participants from academia, industry, and civil society for a months-long drafting process. The work is scheduled to conclude in April 2025, culminating in a regulatory benchmark for general-purpose AI models.

The kick-off plenary, held online, marked the official start of the collaborative work, with four key working groups established to tackle specific areas of the Code. These groups, led by renowned experts such as AI researcher Nuria Oliver and copyright law specialist Alexander Peukert, will focus on:

  1. Transparency and copyright-related rules
  2. Risk identification and assessment
  3. Technical risk mitigation
  4. Internal risk management and governance

These working groups will meet regularly over the next several months to refine the provisions, gather input from stakeholders, and draft the final version of the Code of Practice.

EU’s Leadership in AI Governance

The EU’s AI Act, which was passed by the European Parliament in March 2024, is a pioneering piece of legislation designed to regulate artificial intelligence across Europe. It establishes a risk-based framework, classifying AI systems into various risk levels—ranging from minimal to unacceptable—and mandates specific compliance measures based on the level of risk.

General-purpose AI models, given their wide-ranging applications and potential societal impact, are subject to dedicated obligations under the Act, with stricter requirements for models deemed to pose systemic risk. The Code of Practice will play a critical role in ensuring that these AI systems are developed and deployed responsibly, minimizing risks while promoting innovation.

Despite criticism from major AI companies such as Meta, which argue that the regulations may stifle innovation, the EU’s multi-stakeholder approach is designed to balance safety, ethics, and technological advancement.

Influencing Global AI Policy

With over 430 submissions already received from stakeholders, the EU’s efforts to draft this Code of Practice are setting the stage for a global standard in AI governance. By April 2025, the finalized Code is expected to establish a model for the responsible development and management of general-purpose AI models, offering guidance to countries worldwide as they craft their own AI regulations.

As the AI landscape continues to evolve, the EU’s leadership in AI regulation is likely to have far-reaching implications, influencing global policies and practices surrounding emerging technologies.
