With Generative AI evolving at full throttle, a bright future beckons. The technology has the potential to allow enhanced medical diagnostics, personalized education, optimized resource allocation in humanitarian efforts, and solutions to mitigate the most harmful effects of global warming. However, these same tools may also introduce profound societal disruption.
The technology may soon allow malevolent actors to unleash pandemics, supercharge massive cyberattacks, and accelerate the spread of disinformation. More alarmingly, AI systems may evolve to deceive their human creators, establishing their own objectives in stark contradiction to societal norms. Generative AI is poised to cross thresholds we’ve never anticipated and for which society is unprepared.
This concern is not fearmongering. It is grounded in the technology's current capabilities and the speed at which they advance. Sam Altman, whose company OpenAI created ChatGPT, anticipates superintelligent AI, systems that surpass expert-level skill in most fields and are more powerful than any technology ever developed, within the next decade.
Dario Amodei, former vice president for research at OpenAI and current Anthropic CEO, believes AI may grow too independent for human control within the next two years. Finally, computer scientist Geoffrey Hinton, widely considered "the Godfather of AI," resigned from Google earlier this year over concerns that AI may already be more intelligent than we realize.
Despite these warnings, generative AI lacks any governing body, either internationally or within the United States. The technology is advancing so quickly, and with so few safeguards, that the risks grow exponentially. The time to establish guardrails is now, and the U.S. must lead the effort.
Senators Richard Blumenthal and Josh Hawley champion a bipartisan framework for AI legislation that deserves attention. The proposal calls for an independent federal oversight body to manage the coming evolutions of the technology. This new U.S. government entity would monitor the sale, purchase, or transfer of computational resources exceeding specific thresholds. Microsoft President Brad Smith and the Center for AI Policy publicly support the framework.
Sam Altman, CEO of OpenAI, warned the Senate Judiciary Committee on May 16 of the need for a federal agency that licenses any effort above a certain scale of capability and can shut down companies that do not comply.
Given the catastrophic potential for misuse, such a licensing structure for general-purpose AI is a proactive step toward a gatekeeping mechanism. The structure must establish regulatory thresholds for computational power, development cost, and benchmark performance. As the technology advances, these thresholds must be periodically reviewed and, when necessary, updated.
While the Blumenthal-Hawley framework offers a structure around which to build legislation and a regulatory body, it lacks significant detail. The concept must develop around regulation of the three key resources in AI development: computing hardware, talent, and data. All three are necessary for significant advances in generative AI models. While any regulatory body would be challenged to track and legislate the use of data, a regulatory guideline can account for both computing hardware and talent. Capping either one, hardware or talent, would provide a safety valve over the technology.
Computing hardware is the currency of AI power. Access to high-performance Graphics Processing Units (GPUs) capable of enabling deep learning is a determining factor in the technology's future, and massive numbers of GPUs are required to train models on large datasets. GPUs represent a physical aspect of AI that can be monitored and registered.
A regulatory framework must require a federal license for any purchase above an identified high-risk computing hardware threshold, for example, GPU clusters capable of developing an AI that processes more than a septillion (10^24) operations. The legislation must also require monitoring and reporting of any transfer of GPU sets beyond the high-risk threshold.
Human talent is required to employ the computing hardware. Trained AI professionals build and train the models that unlock the next advancements. Talent costs money, a lot of it, both to train and to employ AI scientists and researchers. Here again, a specific high-risk threshold, for example, $50 million spent on developing and employing human talent, is necessary for a regulatory framework. Any expenditure above that threshold must be reported and examined for potentially catastrophic use.
Meanwhile, to keep pace with the rest of the world and avoid stifling the technology's beneficial aspects, the regulation must include a fast-track system for benign AI applications. This system would exempt developers who pose no major security risk from the full weight of government authority. Engineers working on AI tools that are not dangerous, such as self-driving cars, fraud detection systems, and recommender engines, could carry on with their work even if they exceed the hardware or talent thresholds.
The U.S. cannot wait for the international community to develop an AI regulatory body. The Blumenthal-Hawley framework offers an opportunity for the U.S. to lead the world on AI regulation. International collaboration with bodies like the UK's Frontier AI Taskforce, a team of senior academics appointed to advise the Tory government, can foster information sharing. Multilateral dialogues and global AI safety forums based on the framework will be pivotal in navigating the global challenges of AI. Over time, outreach to NATO countries should focus on the development of an international body overseeing AI governance, with the goal of eventually incorporating the People's Republic of China.
The Blumenthal-Hawley concept is a necessary guidepost along the world’s AI journey. It presents a vision of a future where AI is simultaneously safe and salubrious. By embracing and building on this framework, Congress has the opportunity to set a global gold standard for AI regulation. America, with its penchant for innovation and safety, is poised to lead the world not just in AI development, but also in AI stewardship.
Joe Buccino is a retired U.S. Army colonel who serves as an AI research analyst with the U.S. Department of Defense's Defense Innovation Board. His views do not necessarily reflect those of the U.S. Department of Defense or any other organization.