This past week, European Union policymakers agreed to the AI Act, one of the first significant frameworks to govern the technology and a blueprint for the first major AI regulation in the Western world.
While implementation remains at least two years away, pending ratification, the EU AI Act - combined with elements of President Biden's AI Executive Order and backed by U.S. leadership - represents potential groundwork for a global policy linking the People's Republic of China to the West.
The EU AI Act is the most comprehensive set of guidelines for AI regulation to date. The act covers most of the concerning aspects of the technology: the capability for mass disinformation; concerns about self-learning systems developing and then pursuing their own objectives; and protection of intellectual property absorbed by large language models.
The framework also seeks to protect citizens and individuals from the most intrusive elements of AI through prohibitions on biometric systems that classify individuals based on race or sexual orientation. The guidelines also protect against the unregulated collection of facial data. The legislation does permit law enforcement to employ biometric identification in public areas for specific criminal activities.
The EU framework addresses the two opposing impulses that any effective AI regulation must harmonize: safeguarding against potential harm without impeding innovation. The EU proposal manages both risk reduction and technological advancement. Once enacted, companies developing high-risk AI tools - classified as such due to their potential for social harm - must undergo stringent assessments and provide transparency in their operations.
Registration need not slow development: developers can continue to advance a system while it is under governmental review. Based on the public information released by the EU Parliament, however, it is unclear which body will determine which tools pose a high enough risk to require this additional scrutiny, or what risk threshold will trigger that further review.
As Europe emerges as a frontrunner in comprehensive AI regulation, this momentous step offers a unique opportunity to establish a broader framework for AI governance, potentially aligning with the U.S. and China. With China already having enacted sophisticated AI regulations and publicly expressing willingness for international cooperation last month, the EU structure presents a point around which global powers can pivot on a coordinated set of rules.
China is ahead of both Europe and the U.S., with a set of AI regulations implemented this past August - though its laws are not as comprehensive as the EU act. China has been at the forefront of regulatory development for the past two years while the Biden administration and the EU Parliament struggled to set boundaries around the technology as it advanced exponentially. China’s regulations smartly balance the need for safety and copyright protection while fostering a climate that supports innovation.
The new American rules, some of which take effect in the next three months, establish reporting requirements - but no binding restrictions - for the computing hardware required for advanced AI models. Like the EU rules, the American measures address the technology's ability to produce massive amounts of disinformation very quickly; the press release announcing the EU act, by contrast, offers no details about the control mechanism for this concern. The American rules include mandatory watermarking of distorted photos and deepfake videos produced by AI systems.
The PRC regulations, formally the Cyberspace Administration of China Generative AI Measures, are, in some ways, more stringent than those of the EU. In contrast to the EU and U.S. approaches, which target only high-risk AI models for government registration, China mandates that all AI developers register with a government-established algorithm registry. This comprehensive registry collects detailed information about the training processes of algorithms. Additionally, it obliges AI developers to conduct and pass a security self-assessment.
Robust enforcement
Of the three sets of regulations, the PRC's offer the most robust enforcement mechanism - likely because this is the only set of laws already in effect. The PRC regulations mandate that generative AI services must not create material that encourages the overthrow of the national sovereignty of any state, nor promote terrorism, extremism, ethnic hatred, violence or obscenity, including the dissemination of false and harmful content. This effectively caps the output of PRC large language models.
China’s AI legislation is also rooted in Beijing’s program of censorship and propaganda. For example, China forbids AI developers from creating chatbots that criticize the PRC - a law that presents challenges with self-learning large language models. These regulations force AI firms in China to enact content moderation as rigorously and subjectively as Chinese social media platforms do.
This kind of suppression embedded in the PRC regulations contrasts with the principles upheld in liberal democracies. Engaging with Chinese regulatory authorities toward a global AI law policy could inadvertently validate these severe controls on speech, lending international legitimacy to authoritarianism. A global set of standards must reconcile the PRC's internal state control with the democratic value of freedom of expression.
The future of AI regulation will depend on the ability to adapt to rapidly evolving technologies and the changing global landscape. The experiences of China, the EU, and the U.S. in regulating AI offer a roadmap for developing adaptable international regulations. In all three cases, the regulations were years in development, modified as the technology advanced. Similarly, a global standard must be open to modification as self-learning systems advance.
As with all forms of diplomacy, effective global governance of AI will also require continuous dialogue, exchange of ideas, and a willingness to learn from each other’s experiences.
The agreement on the EU AI Act, China's readiness for international cooperation on AI governance and the new U.S. rules memorialized in the president's executive order represent a pivotal moment in the history of AI. This convergence offers an opportunity to frame the future of a technology that will soon shape every aspect of society.
U.S. Army Colonel (retired) Joe Buccino serves as an advisor to the Center for AI Policy, a nonpartisan research organization that develops policy and conducts advocacy to mitigate risks from AI. His views do not necessarily reflect those of the U.S. Department of Defense or any other organization.