The Deepfake Report Act would require the Homeland Security Department to study the threats posed by manipulated and synthetic text and imagery.
Bipartisan legislation directing an annual, comprehensive examination of the technology underpinning, and the threats posed by, hyper-realistic manipulated media known as deepfakes may have found a path forward as an amendment to the Senate’s fiscal 2021 National Defense Authorization Act.
Sens. Rob Portman, R-Ohio, and Brian Schatz, D-Hawaii, proposed adding the Deepfake Report Act, originally unveiled one year ago, to the annual authorization bill Thursday. The Deepfake Report Act, which passed the Senate in October and was referred to the House Consumer Protection and Commerce Subcommittee, would require the Homeland Security Department to investigate the potential impacts of deepfakes and other related, technologically altered content on national and election security.
“As [artificial intelligence] rapidly becomes an intrinsic part of our economy and society, AI-based threats, such as deepfakes, have become an increasing threat to our democracy,” Portman said in a statement. “Addressing the challenges posed by deepfakes will require policymakers to grapple with important questions related to civil liberties and privacy. This bill prepares our country to answer those questions and address concerns by ensuring we have a sound understanding of this issue.”
Deepfakes refer to digitally and AI-manipulated images, audio and videos that make it appear as if the media’s subjects did or said things they did not. Early iterations of the digitally forged content were posted by a Reddit user who applied machine learning to insert the faces of American celebrities into pornographic videos. Since then, more synthetic content has steadily emerged, including media targeting political leaders, prompting lawmakers to deliberately confront its possible use for disinformation.
“Awareness of deepfake technology has been growing among the public at large,” Matthew F. Ferraro, counsel at WilmerHale LLP who studies, speaks and advises clients on a range of cyber and national security topics, including deepfakes, told Nextgov Monday. “The technology itself is getting better. And policymakers are moving with relative speed to address the feared downsides of manipulated media.”
According to Ferraro, the regulatory state of play around the hyper-realistic, manipulated media “remains in flux.” Presently there are roughly five deepfake-focused bills pending in Congress, as well as legislation pending in nine states, he noted. And in the last year alone, California, Texas and Virginia enacted their own laws regulating certain kinds of deepfakes. Further, the first federal law focused explicitly on deepfakes was passed as part of the 2019 National Defense Authorization Act. Ferraro, who extensively covered the initial law, noted that it “first, requires a comprehensive report on the foreign weaponization of deepfakes, second, requires the government to notify Congress of foreign deepfake-disinformation activities targeting U.S. elections, and, third, establishes a ‘Deepfakes Prize’ competition to encourage the research or commercialization of deepfake-detection technologies.”
“These are important policy innovations, especially around the 2020 election, artificial intelligence and fake news,” he said.
Schatz and Portman’s Deepfake Report Act was originally introduced in both chambers in late June 2019, passed the Senate as a standalone bill by unanimous consent in October and was referred to the House Subcommittee on Consumer Protection and Commerce. In its current form, the legislation requires the Homeland Security Department to conduct an exhaustive study of “digital content forgery technology” and report its findings within one year of enactment, and annually for the following five years. The bill defines digital content forgery as “the use of emerging technologies, including artificial intelligence and machine learning techniques, to fabricate or manipulate audio, visual, or text content with the intent to mislead.”
Ferraro noted that the inclusion of altered “text content” is “notable” in this instance.
“Manipulated text can pose an often overlooked danger, too, alongside photos, videos, and audio,” he explained. “Large-scale, AI-generated text can be used to manipulate social media conversations and infiltrate public notice-and-comment periods, implicating the regulatory functioning of government.”
The act calls for DHS-led examinations into the technologies that underlie deepfakes, descriptions of the various types of digital content forgeries, how foreign governments and their proxies are tapping into the tech to damage national security, the danger deepfakes present to individuals, methods to detect and mitigate such forgeries, and more.
Compared to the first federal deepfake law passed in the 2019 NDAA, which focused largely on the foreign weaponization of deepfakes and their use by foreign actors to target U.S. elections, the new amendment is notably broader in scope, Ferraro highlighted. The inquiries it calls for are not limited to foreign actors’ activities.
“I think it is important to conceive of the challenges of deepfakes in an appropriately broad context, and I believe that the Deepfake Report Act moves in that direction,” Ferraro said. “Realistic forgeries can be used in many ways to contribute to such harms as social engineering, credential theft, business and insurance fraud, and falsified court evidence, among others.”
The attorney added that the new act demonstrates ongoing bipartisan support for legislation promoting an enhanced understanding of the threats deepfakes pose. “In an era of bilious partisanship and legislative logjams, it is heartening to see Congress move with relative alacrity to address an emerging challenge,” he noted, adding that he also sees significance in the fact that this “would be the second deepfakes bill to be adopted by Congress through the omnibus yearly defense policy bill ... reflecting the national security challenge” of digitally altered media.
“Gathering evidence, assessing technology, and positioning the bureaucracy to address these emerging issues are important steps for legislation right now,” Ferraro said. “I expect we’ll see more legislation along these lines and perhaps regulatory action by federal bureaucracies going forward.”