
DARPA Is Taking On the Deepfake Problem

The agency wants to teach computers to detect errors in manipulated media using logic and common sense.

The Defense Department is looking to build tools that can quickly detect deepfakes and other manipulated media amid the growing threat of “large-scale, automated disinformation attacks.”

The Defense Advanced Research Projects Agency on Tuesday announced it would host a proposers day for an upcoming initiative focused on curbing the spread of malicious deepfakes: shockingly realistic but forged images, audio and video generated by artificial intelligence. Under the Semantic Forensics program, or SemaFor, researchers aim to help computers use common sense and logical reasoning to detect manipulated media.

As global adversaries enhance their technological capabilities, deepfakes and other advanced disinformation tactics are becoming a top concern for the national security community. Russia already showed the potential of fake media to sway public opinion during the 2016 election, and as deepfake tools become more advanced and readily available, experts worry bad actors will use the tech to fuel increasingly powerful influence campaigns.

Industry has started developing technology that uses statistical methods to determine whether a video or image has been manipulated, but existing tools “are quickly becoming insufficient” as manipulation techniques continue to advance, according to DARPA.
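
For a sense of what statistical-fingerprint forensics looks like in practice, one classic example (not a technique DARPA named) is error level analysis, which re-compresses an image and looks for regions whose compression residuals stand out. The sketch below is a minimal, hypothetical illustration of this class of tool; it assumes the Pillow library, and the file names are placeholders.

```python
# Minimal sketch of error level analysis (ELA), one example of the
# statistical-fingerprint forensics described above. Hypothetical
# illustration only; file names are placeholders. Requires Pillow.
from PIL import Image, ImageChops

def error_level_analysis(path, quality=90):
    original = Image.open(path).convert("RGB")
    # Re-save the image as JPEG; edited regions often re-compress
    # differently than untouched ones.
    original.save("_resaved.jpg", "JPEG", quality=quality)
    resaved = Image.open("_resaved.jpg")
    # Pixel-wise difference between original and re-saved copies.
    diff = ImageChops.difference(original, resaved)
    # Amplify the residual so inconsistencies become visible.
    max_diff = max(hi for _, hi in diff.getextrema()) or 1
    scale = 255.0 / max_diff
    return diff.point(lambda value: min(255, int(value * scale)))

ela_map = error_level_analysis("suspect_photo.jpg")  # hypothetical file
ela_map.save("ela_map.png")  # brighter regions can hint at edited areas
```

As DARPA notes, fingerprints like these can be scrubbed by a forger with modest extra effort, which is what motivates the semantic approach described below.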

“Detection techniques that rely on statistical fingerprints can often be fooled with limited additional resources,” officials said in a post on FedBizOpps. 

However, they added, manipulated media often contains “semantic errors” that existing detection tools tend to overlook. By teaching computers to catch these mistakes, such as mismatched earrings on a person, researchers can make it harder for digital forgers to fly under the radar.

Beyond simply detecting errors, officials also want the tools to attribute the media to different groups and determine whether the content was manipulated for nefarious purposes. Using that information, the tech would flag posts for human review.

“A comprehensive suite of semantic inconsistency detectors would dramatically increase the burden on media falsifiers, requiring the creators of falsified media to get every semantic detail correct, while defenders only need to find one, or a very few, inconsistencies,” DARPA officials said. 
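
To make that asymmetry concrete, a suite of semantic detectors could be composed so that a single firing check is enough to flag a piece of media for human review. The sketch below is purely illustrative; the detector names and stub logic are hypothetical placeholders, not SemaFor components.

```python
# Illustrative sketch of the detector-suite asymmetry quoted above:
# the forger must pass every check, the defender needs only one hit.
# All detectors here are hypothetical stubs.
from typing import Callable, List

Detector = Callable[[bytes], bool]  # True means an inconsistency was found

def flag_for_review(media: bytes, detectors: List[Detector]) -> bool:
    """Flag media for human review if any single semantic check fires."""
    return any(check(media) for check in detectors)

def mismatched_earrings(media: bytes) -> bool:
    # Hypothetical stub: a real detector might localize both ears and
    # compare jewelry features between them.
    return False

def inconsistent_lighting(media: bytes) -> bool:
    # Hypothetical stub: a real detector might estimate light-source
    # direction from shadows cast by different objects.
    return False

flagged = flag_for_review(b"raw image bytes",
                          [mismatched_earrings, inconsistent_lighting])
```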

But that’s easier said than done. Today, even the most advanced machine intelligence platforms have a tough time understanding the world beyond their training data. In the years ahead, DARPA plans to pour significant resources into building machines capable of common sense reasoning and logic.

The agency will host a proposers day for the SemaFor program on Aug. 28, and groups interested in attending must register by Aug. 21. Officials anticipate releasing a broad agency announcement on the program in the coming weeks.