It’s no surprise that the private sector is collectively investing countless billions of dollars into AI research and use this year. The President’s Fiscal Year 2025 budget request asks for a much more modest sum – probably on the order of a few billion dollars— for AI-related activity spread across the hundreds of large and small non-Defense Department agencies of the U.S. government.

Given this imbalance in level of effort, a logical question to ask is whether the federal government can make a difference in an area as large and fast moving as AI.

Leading by executive orders, frameworks

Last October, the Biden Administration released EO 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The EO noted that, while AI has great potential, “irresponsible use could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security.”

The order outlined a set of principles, including one requiring developers of the largest Generative AI (GenAI) foundation models to share their safety test results and other critical information with the U.S. government before these models are put into public use.

The National Institute of Standards and Technology (NIST) is directed to set rigorous standards for extensive red-team testing to ensure safety, and the Department of Homeland Security will apply these NIST standards to GenAI used in critical infrastructure. This directive was followed last February by EO 14117, focused on data stewardship, which noted that malicious actors can “use access to bulk data sets to fuel the creation and refinement of AI and threaten national security” and which set out safeguards against adversarial use of bulk data on Americans in training large language models and other GenAI.

NIST is a veritable treasure trove of guidance, frameworks and case studies for agencies and businesses that don’t have the time, money or expertise to develop their own strategies, especially in an area as dynamic as AI. NIST generally has a lot of the answers, and they’re usually written in ways that are actionable and relatively easy to understand. A case in point is NIST AI 100-1, its recently released Artificial Intelligence Risk Management Framework (AI RMF).

The goal of the AI RMF is to offer a resource for organizations designing, developing, deploying or using AI systems to “help manage the many risks of AI and promote trustworthy and responsible development and use of AI systems.” NIST frameworks tend to be widely adopted by the private sector since they are vendor- and solution-agnostic and readily adaptable by organizations of all sizes and sectors, and I expect this will prove true for the AI RMF as well.

Federal AI spending

Federal civilian agency AI-related spending is largely focused on facilitating broad participation in research and on managing risks and abuses in the marketplace.

Looking at some of the highlights of federal AI research funding, the FY 2025 budget request would provide the National Science Foundation (NSF) with $2 billion to support R&D in critical emerging technology areas that align with the CHIPS and Science Act priority of boosting U.S. competitiveness in science and technology, with some unspecified portion of this going to AI. The budget request also includes $455 million for the Department of Energy to pursue AI-related projects that increase AI’s safety, security, and resilience.

On a smaller scale, $30 million is requested for the NSF to support the second and final year of the National AI Research Resource pilot, an initiative that “aims to democratize AI research and innovation by providing access to the computing, data, software and educational resources needed to fully conduct their research into trustworthy AI” and to train the next generation of researchers. NIST is part of the Department of Commerce, which would receive $65 million to safeguard, regulate, and promote AI, including establishing a U.S. AI Safety Institute tasked with operationalizing NIST’s AI RMF.

Other budgetary priorities relate to specific department and agency activity to help manage AI-related risk. We are all familiar with the ongoing debate over the safety of self-driving cars, and the budget request provides funding to the Department of Transportation’s Office of Automation Safety and the National Highway Traffic Safety Administration to address cybersecurity as well as AI-related risks.

The Department of Health and Human Services would receive $141 million for cybersecurity and information system improvements to “promote the use of artificial intelligence in healthcare and public health while protecting against its risks.”

These are just examples of the range of AI initiatives and activities the federal government has underway or intends to begin (funding permitting), but on balance they reflect priorities that are collectively broad yet individually well-focused. While it may be difficult to measure their success in the short term, these are sound priorities for government action in the area of AI.

Jim Richberg is Fortinet Head of Cyber Policy, Global Field CISO and a Fortinet Federal Board Member.
