Headlines

The European Union is set to adopt the most comprehensive set of laws to regulate artificial intelligence (AI) with the EU AI Act.

The European Union (EU) has agreed the world’s first major legislation on artificial intelligence (AI), known as the EU AI Act. This landmark legislation aims to regulate AI systems to ensure they are safe, transparent, and respect fundamental rights and EU values. The EU AI Act is particularly focused on high-risk AI models and seeks to establish rules that will govern their development and use.

One of the key aspects of the EU AI Act is its regulation of general-purpose AI models, such as GPT (Generative Pre-trained Transformer), the model family that powers ChatGPT, developed by OpenAI in San Francisco, California. Because these models have broad and unpredictable uses, they are potentially risky. The law sets strict rules for their developers, requiring them to demonstrate safety, transparency, and adherence to privacy regulations.

The EU’s approach to regulating AI is based on the risk posed by each application. High-risk AI systems, such as those used in hiring and law enforcement, must meet specific obligations to ensure they do not discriminate or pose unacceptable risks to individuals. Lower-risk AI tools face lighter obligations, such as disclosing to users when they are interacting with AI-generated content.

While some researchers have welcomed the EU AI Act for its potential to encourage open science, others worry that it could stifle innovation. Critics point to exemptions for military and national-security purposes, as well as potential loopholes for AI use in law enforcement and migration. However, the EU has taken steps to ensure that the legislation does not hamper research, notably by exempting AI models developed purely for research, development, or prototyping.

For powerful general-purpose models like GPT, the EU AI Act creates a two-tier regulatory system. Models other than those used exclusively in research or released under an open-source licence must meet transparency requirements and demonstrate compliance with copyright law. High-impact models, deemed to pose systemic risk, will face even stricter obligations, including safety testing and cybersecurity checks.

Enforcement of the EU AI Act will fall to the European Commission, which will establish an AI Office to oversee general-purpose models. This office will work with independent experts to evaluate model capabilities and monitor related risks. However, questions remain about whether a public body will have the resources and capacity to adequately scrutinize submissions from companies like OpenAI.

The EU AI Act is set to enter into force in 2026, pending final sign-off from the European Parliament. As the first comprehensive set of laws to regulate AI, the EU’s approach may serve as a blueprint for other countries and regions considering their own AI legislation. As the AI landscape continues to evolve, it is crucial for policymakers and researchers to work together to ensure that AI is developed and used responsibly, in line with societal values and norms.

Source: doi:10.1038/d41586-024-00497-8 (www.nature.com/articles/d41586-024-00497-8)