Congress is grappling with a new enemy that has infiltrated the American public in one of its most vulnerable spots: social media.

Deepfakes, which manipulate audio and video so convincingly that most people cannot tell they have been altered, became a topic of national discussion after a manipulated video of House Speaker Nancy Pelosi appearing to slur her speech went viral. The House Intelligence Committee held a hearing June 13 on deepfake technology. Rep. Adam Schiff, D-Calif., the chairman of the House Permanent Select Committee on Intelligence, warned of a “technical revolution that could enable even more sinister forms of deception and disinformation by malign actors, foreign or domestic.”

During the hearing, the committee heard testimony on the national security implications of deepfakes and the challenges lawmakers will face in combating them before the 2020 election.

Clint Watts, a distinguished research fellow at the Foreign Policy Research Institute, said the government not only needs a plan to combat deepfakes, it must also deploy that plan quickly.

“It comes down to who’s there first and who’s there the most and that’s the danger of social media computational propaganda,” he said.

Other witnesses echoed that sentiment.

“Let there be no question that this is a race,” said David Doermann, a professor at the University at Buffalo and an expert on artificial intelligence. “The better manipulators get, the better detectors need to be and there are certainly orders of magnitude more manipulators than there are detectors.”

Doermann previously worked as a program manager at the Defense Advanced Research Projects Agency, where he started the media forensics program, MediFor. MediFor was created as a government response to manipulated media such as deepfakes. He describes the solution as more of a marathon than a sprint.

“The program was assigned to address both current and future manipulation capabilities, not with a single point solution, but with a comprehensive approach,” Doermann said. “What was not expected, however, was the speed at which this technology would evolve.”

Adobe and UC Berkeley researchers, with the sponsorship of the DARPA MediFor program, announced June 14 a method for detecting images edited with Photoshop’s Face Aware Liquify feature, a tool that lets users adjust and exaggerate facial features.
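Published details aside, detectors of this kind are commonly built as binary classifiers trained to distinguish original images from edited ones. The sketch below illustrates that general approach in PyTorch; the dataset layout, model choice and hyperparameters are assumptions for illustration only, not Adobe’s or DARPA’s actual implementation.

```python
# Hypothetical sketch of a manipulated-image detector: fine-tune a
# standard CNN as a binary classifier on original vs. edited face crops.
# Not Adobe's or DARPA's actual code; paths and settings are assumed.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Standard ImageNet-style preprocessing for the input images.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Assumed folder layout: data/train/original/*.jpg and data/train/edited/*.jpg
train_set = datasets.ImageFolder("data/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained backbone and replace the final
# layer with a two-class head: original vs. edited.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One pass over the training data; a real detector would train for
# many epochs and validate on held-out manipulated images.
model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```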

One problem Congress could address is liability for publishing this content. Witnesses said it is often difficult to hold platforms responsible for hosting deepfakes because Section 230 of the Communications Decency Act broadly shields them from liability for user-generated content.

“It’s called ‘Good Samaritan blocking and filtering of offensive content’ and it’s been interpreted really broadly to say that if you under-filter content, if you don’t engage in any self-monitoring at all, even if you encourage abuse, you’re immune from liability for user-generated content,” said Danielle Citron, a professor at the University of Maryland’s Francis King Carey School of Law.

However, even if Congress moved to restrict the artificial intelligence techniques that create deepfakes, doing so could cause problems for researchers.

“Similar technologies used in the production of synthetic media or deepfakes are also likely to be used in valuable scientific research,” said Jack Clark, the policy director at the research organization OpenAI. “They’re used by scientists to allow people with hearing issues to understand what other people are saying to them or they’re used in molecular assay or other things which may revolutionize medicine.”

Witnesses also explained how the growing accessibility of the technology is accelerating the problem.

“You used to have to go out and buy Photoshop or you know have some of these desktop editors. Now a high school student with a good computer, and if they’re a gamer they already have a good [graphics] card, can download this, can download data, and train this type of thing overnight with software that’s open and freely available,” said Doermann. “It is not something that you have to [be] an AI expert to run. A novice can run these types of things.”

The committee and witnesses also expressed concern that deepfakes were thriving in the current media environment.

“There is license to call things fake that are true but are critical and it seems that that’s a pretty fertile ground for the proliferation of information that is truly fake,” Schiff said.

Besides deepfakes portraying situations that never actually happened, Citron expressed concern that people will begin dismissing real videos that reflect poorly on them as deepfakes, a phenomenon she calls the “liar’s dividend.”

“I think it’s worth noting too that when President Trump referred to the Access Hollywood tape, he said well that never really happened,” said Citron. “We’ve already seen the liar’s dividend in practice from the highest of the bully pulpits, so I think we’ve got a real problem on our hands.”

Kelsey Reichmann is a general assignment editorial fellow supporting Defense News, Fifth Domain, C4ISRNET and Federal Times. She attended California State University.
