As far as the U.S. military and federal government are concerned, speed is the name of the game when it comes to adoption and regulation of artificial intelligence.

“My administration places the highest urgency on governing the development and use of AI safely and responsibly,” said President Joe Biden in October.

“We face threats at machine speed and adversaries unbounded by bureaucracy,” says CISA’s strategic plan.

“We can’t move so fast that we do flawed legislation, but there’s no time for waste or delay or sitting back,” said Sen. Chuck Schumer on the Senate floor last year. “We’ve got to move fast.”

“We will be keeping up the momentum, ensuring we make the best possible use of AI technology responsibly and at speed,” said Deputy Secretary of Defense Kathleen Hicks in November.

In the last few years, three executive orders, two laws (alongside another introduced Wednesday) and countless frameworks, agency-level policies and strategic plans have been unveiled to charge federal agencies big and small with AI adoption.

Whether it’s a matter of national security, warfighting or delivering government services more efficiently, the federal government’s desire and duty to become as technologically equipped as any other sector has been repeatedly and publicly communicated. Why? Because trust in government is at record lows, and federal agencies want to show they are responsive to the people’s needs — and they think AI can help. For the U.S. military, the State Department and intelligence agencies, the urgency also stems from a sense of a heightened threat environment.

“China views AI as a means to gain a strategic advantage over the United States and its allies,” analysts from the German Marshall Fund, a non-partisan American public policy think tank, wrote in a 2023 article. “It intends to use AI to build a world-class military.”

But what Congress and the White House expect and what is realistic may not always be in sync. A study by the consulting firm Ernst & Young shows that less than a third of government officials leading the charge on AI believe the federal government will become a quick adopter of such technologies in the next year.

A majority of officials surveyed projected that would become a reality three or more years from now, according to the results.

There are a few possible reasons for that. The most cited challenge was a lack of personnel, followed by budget issues, which 58% of respondents said were “very” or “extremely” challenging. Data quality and security were also widely mentioned, according to the report’s survey of 200 federal government IT decision-makers.

In other cases, the spirit of innovation may be stunted: 49% of respondents said concerns over AI usage within their organization are an obstacle.

“In government, we are struggling to move our policy and our standards process at the pace that AI is moving,” said Suzette Kent, former federal CIO at the Office of Management and Budget, at a MeriTalk event on Jan. 30.

Leadership discrepancies

Another issue that could be an impediment to synchronized AI efforts within agencies and between them is the role of leadership. In some agencies, that’s taken the form of a chief AI officer or a chief data officer — or both. But the presence of a C-suite leader doesn’t guarantee that they have the resources, authority or respect to fully execute the duties of their office.

Sixty percent of respondents said their CDO has enough money to do the job, and about 65% said that official has enough internal resources.

The other lingering challenge is mixed perceptions of who is actually responsible for the data that underlies AI. Fewer than half of respondents said that’s the CDO’s job. Other respondents said it’s the responsibility of an entire team, a single, non-CDO employee, a committee of staff members who primarily do other tasks or a part-time staff member.

“Entrusting (or burdening) employees with data governance who have other duties may mean that this crucial step is left on the back burner,” the report said.

Regardless of who’s in that seat, the government recognizes data needs to be governed in some way, especially because the majority of agencies rely on their internal data to train their AI.

AI viewed differently by defense, civilian agencies

Though the government is often thought of as one lump entity, the reality is that agencies within government are approaching AI in different ways and for different reasons.

“Defense respondents, for example, are almost twice as likely to report using sensor data to train their AI models as their civilian counterparts,” according to the data.

And while AI is currently being used to develop unmanned vehicles and autonomous weapon systems at the Pentagon, it’s also being used for more ordinary, but no less important, tasks.

At this stage, 76% of civilian agencies are using AI as a data analysis tool. Other popular use cases for them are document analysis, process execution, predictive analytics and chatbots or virtual assistants.

For example, the Office of Personnel Management has identified two instances where AI can help suggest openings to applicants on USAJobs based on their skills and background. The Social Security Administration sees potential for AI to extract data from scanned images of paystubs to enable faster processing of benefits. And at the Department of Veterans Affairs, there’s perhaps a role for AI to play in predicting suicide risk based on clinical records.

These are just three of the 1,200 planned or current AI use cases across non-defense agencies identified by the Government Accountability Office and on file with the Biden administration.

Despite many agencies having ideas for how to use AI, the vast majority of those use cases are still in the planning phase and not yet in use.

The online survey, conducted in 2023, polled 200 federal IT officials across more than two dozen agencies who work on AI policies. About 60% work for civilian agencies, and most respondents fell within the GS-12/O-4 to GS-15/O-6 pay range.

Molly Weisner is a staff reporter for Federal Times where she covers labor, policy and contracting pertaining to the government workforce. She made previous stops at USA Today and McClatchy as a digital producer, and worked at The New York Times as a copy editor. Molly majored in journalism at the University of North Carolina at Chapel Hill.
