The federal government has put forward a number of policy blueprints to ensure the ethical and transparent use of artificial intelligence in government. But those blueprints mean little if agencies lack the leadership and workforce to enforce them — and right now, they do, tech experts told Senate lawmakers on Tuesday.

AI has been gaining popularity as agencies are drawn to the idea of automating rote processes. The concern is that without an expert workforce to guide its trajectory, AI could leave agencies riding without training wheels, causing harm to themselves and others, experts said before the Senate Committee on Homeland Security & Government Affairs on May 16.

Consider: the director's post at the National Artificial Intelligence Initiative Office within the White House — a role that is key to coordinating AI policy and research across government — sits vacant. Some agencies have yet to install a chief AI officer. And there are some 40,000 cybersecurity jobs to fill in the public sector to support government's approach to ethical, safe AI use. During a separate hearing, the CEO of OpenAI, the company behind ChatGPT, himself urged government intervention as necessary to mitigate the risks of powerful AI systems.

Such leadership vacancies make it difficult for agencies to both shape and curb the technology, said Lynne Parker, a professor at the University of Tennessee and the former director of NAIIO. Experts urged specific guardrails for automated technology that, at its most extreme, can formulaically cut subsistence funding for Medicaid beneficiaries without so much as a review by a benefits officer — an example given by a witness from the American Civil Liberties Union.

“Technical leadership is so absolutely critical,” Daniel Ho, a professor at Stanford University, told the committee. Fewer than 2% of AI PhDs end up in government, according to the latest AI Index report by Stanford.

Agencies need to see what they can do immediately to grease the wheels of hiring, experts said. They can start by building out existing programs like the Presidential Innovation Fellows program, Ho recommended. Then, upskill the workers government already has with a bootcamp-like training academy under the U.S. Digital Service. Finally, the Office of Personnel Management needs to carve out an occupational series for AI, which was due last July.

OPM did not immediately respond to requests for comment regarding the timeline for the job classifications.

It’s also no secret to federal recruiters that tech professionals, with and without college degrees, are lured by unchecked salaries and exciting projects in the private sector. Ho said international students are also “choosing to leave the country” for other nations, like Canada, which ranks fourth out of 54 countries in the Global AI Index for competitiveness.

Though many international students may come to the U.S. to help fill out its tech ranks, research by ICEF Monitor, a market research arm, finds roughly one in three international students doesn’t feel prepared to find a career in the U.S.

Sen. Alex Padilla (D-Calif.) made a point during Tuesday’s hearing that the country’s unwieldy immigration system is what imposes hardships on foreign students who want to stay for a job.

OPM has also been tasked by Congress’ AI in Government Act of 2020 with reporting on agencies’ two- and five-year AI career needs.

In the meantime, panelists said Congress can help agencies know what to recruit for by ordering the development of a National Initiative for AI Education Framework, which would help aspiring students know what minimum qualifications they need.

Parker also suggested a chief AI officers council, similar to the one for chief human capital officers, that would serve as a central authority for policy questions and AI use cases. Agencies need an expert body to walk them through use cases that can vary wildly depending on whether an agency wants to use the technology for law enforcement or administrative tasks.

“As AI technology advances, responsible management of AI systems will be challenging if the skills necessary to successfully develop, buy or use AI capabilities are lacking,” said Taka Ariga, chief data scientist at the Government Accountability Office.

With reporting by the Associated Press.

Molly Weisner is a staff reporter for Federal Times where she covers labor, policy and contracting pertaining to the government workforce. She made previous stops at USA Today and McClatchy as a digital producer, and worked at The New York Times as a copy editor. Molly majored in journalism at the University of North Carolina at Chapel Hill.
