
Can the federal government be sure its AI isn’t biased?

A new report released Nov. 20 by the White House outlined the progress that federal agencies have made on artificial intelligence research, but was light on details about how agencies are combating AI discrimination.

The 40-page report from the White House Office of Science and Technology Policy (OSTP) reviewed nearly four years of federal AI research and development across eight strategic areas. And while the evaluation was glowing, it offered few specifics on how federal agencies are working to make AI algorithms fairer.

There are widespread concerns today that AI systems can discriminate on the basis of race and gender. In a featured example, OSTP pointed to a National Science Foundation-funded study on fairness in AI whose results demonstrated just how challenging reducing bias can be.

“The researchers showed that … fairness criteria in general do not promote improvement in well-being over time and may in fact cause harm in certain cases,” OSTP wrote in the report, titled “2016-2019 Progress Report: Artificial Intelligence R&D.”
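The dynamic behind that finding can be shown with a stylized simulation: a fairness rule that equalizes loan-approval rates across groups can drag down the average credit score of the group whose marginal applicants repay less often. The sketch below is not the NSF study’s model; the score distributions, repayment curve and update rule are all assumptions made for illustration.

```python
# A stylized toy model (not the NSF study itself) of how a static
# fairness criterion can cause harm over time. A parity-style rule
# approves the same fraction of each group; in the group whose
# marginal applicants repay less often, defaults outweigh repayments
# and the group's average credit score falls.
import numpy as np

rng = np.random.default_rng(1)
scores = {"A": rng.normal(650, 40, 10_000),   # assumed distributions
          "B": rng.normal(600, 40, 10_000)}

def repay_prob(score):
    # Assumed link between score and probability of repayment.
    return 1 / (1 + np.exp(-(score - 600) / 40))

approval_rate = 0.5   # "demographic parity": same rate in both groups
for group, s in scores.items():
    before = s.mean()
    cutoff = np.quantile(s, 1 - approval_rate)   # approve the top half
    approved = s >= cutoff
    repaid = rng.random(approved.sum()) < repay_prob(s[approved])
    # Assumed dynamics: repayment lifts a score, default hurts it more.
    s[approved] += np.where(repaid, 10.0, -30.0)
    print(f"group {group}: mean score change {s.mean() - before:+.2f}")
```

In this toy run the higher-scoring group’s average rises while the lower-scoring group’s falls, echoing the report’s warning that fairness criteria “may in fact cause harm in certain cases.”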

The report emphasized that “a number of” agencies have started research-and-development programs focused on ethical, legal and societal implications of AI, but it was far from specific.

For example, OSTP pointed to the Department of Homeland Security’s work, writing that to “keep ahead of the curve and ensure that use of AI does not unfairly or illegally disadvantage individuals,” the department is “applying and extending” its existing tools and frameworks. But the report offered no real insight into how those frameworks were applied, whether they succeeded or how they are advancing research. It also said DHS’ Science and Technology Directorate had “identified bias and fairness in AI systems as priority issues,” without elaborating on how specifically that has guided research and development.

That NSF-funded study is one of “several” fairness initiatives being led by the agency, according to the report, which added that other NSF projects focus on “fairness and ethical implications of AI and techniques to mitigate bias and enhance accountability, transparency, and robustness to ensure societal benefit.”

The Defense Advanced Research Projects Agency, however, is working on one project that could enhance fairness. Through its Explainable Artificial Intelligence program, DARPA has developed AI systems that can write out in English why they made a decision. In one example provided, DARPA showed a system a picture of a bird; the system identified it specifically as a downy woodpecker and explained its decision by writing “this is a downy woodpecker because it is a black and white bird with a red spot on its crown.”

“Most importantly, the systems were not canned responses but were generated on the fly by another artificial neural network that was tied to and co-trained with the image classifier,” the report read.
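The report does not detail DARPA’s architecture, but the co-training idea it describes can be sketched: an image classifier and an explanation generator share one visual encoder and are optimized with a single joint loss. Everything below, from the layer sizes to the GRU decoder and the dummy data, is an illustrative assumption, not DARPA’s actual design.

```python
# A minimal co-training sketch: a classification head and an
# explanation-generating decoder share an image encoder and are
# trained together, so the explanation network is "tied to and
# co-trained with the image classifier." Illustrative only.
import torch
import torch.nn as nn

class ExplainableClassifier(nn.Module):
    def __init__(self, num_classes=200, vocab_size=5000, embed_dim=256):
        super().__init__()
        # Shared visual encoder (a stand-in for a real CNN backbone).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Head 1: predicts the class (e.g. "downy woodpecker").
        self.classifier = nn.Linear(64, num_classes)
        # Head 2: a small recurrent decoder that generates the English
        # explanation, conditioned on the same image features.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.decoder = nn.GRU(embed_dim + 64, 256, batch_first=True)
        self.vocab_out = nn.Linear(256, vocab_size)

    def forward(self, images, explanation_tokens):
        feats = self.encoder(images)                    # (B, 64)
        class_logits = self.classifier(feats)
        tok = self.embed(explanation_tokens)            # (B, T, E)
        # Feed the image features into every decoding step.
        feats_rep = feats.unsqueeze(1).expand(-1, tok.size(1), -1)
        hidden, _ = self.decoder(torch.cat([tok, feats_rep], dim=-1))
        return class_logits, self.vocab_out(hidden)     # (B, T, vocab)

# Co-training: one optimizer minimizes the classification loss and the
# explanation loss together.
model = ExplainableClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.randn(4, 3, 64, 64)          # dummy image batch
labels = torch.randint(0, 200, (4,))        # dummy class labels
tokens = torch.randint(0, 5000, (4, 12))    # dummy explanation words
class_logits, word_logits = model(images, tokens[:, :-1])
loss = (nn.functional.cross_entropy(class_logits, labels)
        + nn.functional.cross_entropy(word_logits.reshape(-1, 5000),
                                      tokens[:, 1:].reshape(-1)))
opt.zero_grad(); loss.backward(); opt.step()
```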

Research and development projects like that are important because they build trust between the user and the AI, increasing the transparency of the algorithms and allowing “people to spot and correct errors the AI system made when it generalized from its training data.”

The stakes are high; the same technology also has implications for warfare, potentially helping distinguish between enemy soldiers and civilians.

Cybersecurity

Artificial intelligence R&D programs at the Department of Energy have also had positive cybersecurity implications. According to the report, the department’s AI research helped it develop a more effective system for detecting anomalies.

“Until recently, anomaly detection approaches suffered from prohibitively high false alarm rates and typically leveraged only a small fraction of available data, thereby limiting their detection capability,” the report read.

To address the false alarms, the department ingested more sensor data and applied deep reinforcement learning, which essentially trains the algorithm through trial and error.
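The report doesn’t say exactly how the system works, but the benefit of more data is easy to illustrate: averaging readings from more independent sensors shrinks the noise, so a fixed alarm threshold fires less often during normal operation while still catching real events. The toy calculation below shows only that data-volume half of the fix (the reinforcement learning half would be considerably more involved), and every number in it is an assumption.

```python
# Toy illustration (not DOE's system): fusing more independent sensor
# channels lowers the false alarm rate at a fixed threshold while the
# detection rate for a real anomaly goes up.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000        # samples of normal operation / of a true anomaly
threshold = 1.0    # fixed alarm threshold, in sensor units
shift = 2.0        # how far a real event shifts every channel

def alarm_rates(num_sensors):
    noise = rng.normal(0.0, 1.0, size=(n, num_sensors))
    normal = noise.mean(axis=1)               # nothing happening
    anomalous = (shift + noise).mean(axis=1)  # real event, all channels
    return (np.mean(np.abs(normal) > threshold),   # false alarm rate
            np.mean(anomalous > threshold))        # detection rate

for k in (1, 4, 16):
    fa, det = alarm_rates(k)
    print(f"{k:2d} sensors: false alarms {fa:7.3%}, detections {det:.2%}")
```

With one sensor, the toy detector false-alarms on roughly a third of normal samples; with 16 fused sensors, the false alarm rate collapses to near zero while detections approach 100 percent.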

Another NSF-funded project researched the use of adversarial AI to fool neural networks and the defenses built around them. The researchers tried to craft adversarial inputs that could breach different neural network defenses, and succeeded several times.
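The report does not name the attack techniques the researchers used. As a rough illustration of how adversarial examples work in general, the sketch below uses the classic fast gradient sign method (Goodfellow et al., 2015) against a toy, untrained classifier, so it demonstrates only the mechanics, not the study itself.

```python
# Fast gradient sign method (FGSM), one well-known way to craft an
# adversarial example: nudge every input pixel a small step in the
# direction that most increases the classifier's loss. The model and
# input here are untrained dummies, for illustration only.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
image = torch.rand(1, 1, 28, 28, requires_grad=True)         # dummy input
label = torch.tensor([3])                                    # dummy label

loss = nn.functional.cross_entropy(model(image), label)
loss.backward()                    # gradient of the loss w.r.t. the pixels
epsilon = 0.05                     # perturbation budget per pixel
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

# Against a trained but undefended network, a perturbation this small is
# typically invisible to humans yet often changes the prediction.
print("original:   ", model(image).argmax(dim=1).item())
print("adversarial:", model(adversarial).argmax(dim=1).item())
```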

“They illustrated the depths of the problem of adversarial examples and the ease with which apparently strong defenses can be defeated,” the report read. “Their efforts made clear that AI security will be a long-lasting problem that will require persistent, detailed analyses to address.”

OSTP provided several examples of successful AI projects underway across government, but the 40-page report offered little detail on what’s next. It did acknowledge that government agencies excelling in AI need to share their best practices for creating beneficial public-private partnerships for AI research.

“While private-sector innovation in AI R&D is proceeding at a significant pace, the vast majority of industry R&D is focused on near-term applications,” OSTP wrote. “The federal government, on the other hand, invests in high-risk, long-term research and is uniquely positioned to provide leadership and facilitate collaboration.”
