On 1 August 2024, the European Union Artificial Intelligence Act (EU AI Act) came into force, establishing the world’s first comprehensive, risk-based regulatory framework for AI systems. This Sunday, 2 February 2025, the Act’s initial provisions take effect, including the prohibition of AI systems deemed to pose an “unacceptable risk” and the introduction of AI literacy requirements.
Effective immediately, any use or marketing of AI systems that present an “unacceptable risk” within the EU is banned. Organizations found in violation of these rules face penalties of up to EUR 35 million or 7% of their global annual turnover, whichever is higher.
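As a rough illustration of how that penalty ceiling scales with company size, the sketch below computes the cap as the higher of the two thresholds. The function name is invented for illustration, and this is a simplified reading of the fine structure, not legal guidance.

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Illustrative upper bound on fines for prohibited-practice
    violations: EUR 35 million or 7% of worldwide annual turnover,
    whichever is higher. A sketch, not legal advice."""
    FIXED_CAP_EUR = 35_000_000
    TURNOVER_RATE = 0.07
    return max(FIXED_CAP_EUR, TURNOVER_RATE * global_annual_turnover_eur)

# A company with EUR 1 billion in turnover faces a cap of EUR 70 million,
# since 7% of turnover exceeds the EUR 35 million floor.
print(max_fine_eur(1_000_000_000))  # 70000000.0
```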
The Act specifies a range of activities that are strictly forbidden, including:
- Deployment of harmful subliminal, manipulative, or deceptive techniques
- Exploitation of vulnerabilities in harmful ways
- Implementation of unacceptable social scoring systems
- Making individual risk assessments or predictions of criminal offending based solely on profiling or personality traits (with certain exceptions)
- Engaging in untargeted scraping of internet or CCTV data to develop or expand facial recognition databases
- Utilizing emotion recognition technologies in workplace and educational settings (with specific exceptions)
- Employing biometric categorization to infer sensitive personal data (with certain exceptions)
- Performing real-time remote biometric identification (RBI) in public spaces for law enforcement (with certain exceptions)
Fiona Ghosh, a partner at Ashurst, commented on the broader implications:
“As the EU AI Act is implemented in phases, Europe’s regulatory environment is set to transform considerably. This development comes at a time when other regions, notably the US—and to a lesser extent the UK—are indicating a move away from such stringent regulation, potentially driven by global competitive pressures. How this divergence will evolve remains to be seen.”
Importantly, the EU AI Act applies to both providers and deployers regardless of their geographic location. This means that even US-based companies operating in the EU will need to adhere to the new rules, despite the absence of similar federal AI legislation in the United States.
Marcus Evans, partner at Norton Rose Fulbright, elaborated:
“The AI Act’s global reach is significant. It applies not only to organizations within the EU that use or supply AI systems, but also to any entity that provides or distributes AI, or whose AI outputs are used within the EU market. For example, a company using AI for recruitment in the EU—despite being based abroad—will be subject to these regulations.”
In addition to the prohibitions on unacceptable-risk systems, 2 February 2025 is also the date on which the Act’s AI literacy requirements take effect. These provisions are intended to ensure that organizations invest in proper training and upskilling for teams working with AI systems.
Matt Worsfold, risk advisory partner at Ashurst, advised businesses to view this deadline as a call to action:
“Companies should use this timeline as an opportunity to begin developing a compliance strategy. With about 18 months remaining until the more comprehensive requirements of the Act come into play, it is critical for organizations to start the process of identifying and cataloguing all AI systems and use cases. Given that many companies have been using or developing AI for years without centralized records, and often collaborate with third parties, this task is expected to be both challenging and time-consuming.”
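For organizations beginning the cataloguing exercise Worsfold describes, a minimal sketch of an internal AI system register is shown below. The record fields, risk tiers, and vendor name are illustrative assumptions, not a schema prescribed by the Act.

```python
from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    # Risk tiers loosely mirroring the Act's structure; illustrative only.
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    """One entry in a hypothetical internal AI inventory."""
    name: str
    owner: str                  # accountable team or individual
    vendor: str | None          # third-party supplier, if any
    use_case: str               # e.g. "CV screening for EU recruitment"
    risk_category: RiskCategory
    deployed_in_eu: bool        # relevant given the Act's extraterritorial scope
    notes: str = ""

inventory: list[AISystemRecord] = [
    AISystemRecord(
        name="resume-ranker",
        owner="HR Technology",
        vendor="ExampleVendor Ltd",  # hypothetical third party
        use_case="CV screening for EU recruitment",
        risk_category=RiskCategory.HIGH,
        deployed_in_eu=True,
    ),
]

# Surface anything needing immediate review under the 2 February 2025 rules.
for record in inventory:
    if record.deployed_in_eu and record.risk_category is RiskCategory.UNACCEPTABLE:
        print(f"Review immediately: {record.name} ({record.use_case})")
```

Even a simple register like this makes it easier to assign ownership and track third-party dependencies as the Act’s remaining obligations phase in.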
The new regulations underscore the EU’s commitment to ensuring that AI technology is developed and deployed safely and responsibly, with significant implications for both domestic and international organizations operating within its jurisdiction.