The threat of manipulated multi-modal media – which includes audio, images, video, and text – is increasing as automated manipulation technologies become more accessible, and social media continues to provide a ripe environment for viral content sharing. The creators of convincing media manipulations are no longer limited to groups with significant resources and expertise. Today, an individual content creator has access to capabilities that could enable the development of an altered media asset that creates a believable, but falsified, interaction or scene.
“At the intersection of media manipulation and social media lies the threat of disinformation designed to negatively influence viewers and stir unrest,” said Dr. Matt Turek, a program manager in DARPA’s Information Innovation Office (I2O). “While this sounds like a scary proposition, the truth is that not all media manipulations have the same real-world impact. The film industry has used sophisticated computer-generated editing techniques for years to create compelling imagery and videos for entertainment purposes. More nefarious manipulated media has also been used to target reputations, the political process, and other key aspects of society. Determining how media content was created or altered, what reaction it’s trying to achieve, and who was responsible for it could help quickly determine whether it should be deemed a serious threat or something more benign.”
While statistical detection techniques have been successful in uncovering some media manipulations, purely statistical methods are insufficient to address the rapid advancement of media generation and manipulation technologies. Fortunately, automated manipulation capabilities used to create falsified content often rely on data-driven approaches that require thousands of training examples, or more, and are prone to making semantic errors. These semantic failures provide an opportunity for the defenders to gain an advantage.
The Semantic Forensics (SemaFor) program seeks to develop technologies that make the automatic detection, attribution, and characterization of falsified media assets a reality. The goal of SemaFor is to develop a suite of semantic analysis algorithms that dramatically increase the burden on the creators of falsified media, making it exceedingly difficult for them to create compelling manipulated content that goes undetected.
To develop analysis algorithms for use across media modalities and at scale, the SemaFor program will create tools that, when used in conjunction, can help identify, deter, and understand falsified multi-modal media. SemaFor will focus on three specific types of algorithms: semantic detection, attribution, and characterization.
Semantic detection algorithms will determine if multi-modal media assets were generated or manipulated, while attribution algorithms will infer if the media originated from a purported organization or individual. Determining how the media was created, and by whom, could help determine the broader motivations or rationale for its creation, as well as the skillsets at the falsifier’s disposal. Finally, characterization algorithms will reason about whether multi-modal media was generated or manipulated for malicious purposes.
“There is a difference between manipulations that alter media for entertainment or artistic purposes and those that alter media to generate a negative real-world impact. The algorithms developed on the SemaFor program will help analysts automatically identify and understand media that was falsified for malicious purposes,” said Turek.
SemaFor will also develop technologies to enable human analysts to more efficiently review and prioritize manipulated media assets. This includes methods to integrate the quantitative assessments provided by the detection, attribution, and characterization algorithms to automatically prioritize media for review and response. To help provide an understandable explanation to analysts, SemaFor will also develop technologies for automatically assembling and curating the evidence provided by the detection, attribution, and characterization algorithms. Throughout the life of the program, the SemaFor technologies will be evaluated against a set of increasingly difficult challenge problems that are representative of new or emerging threat scenarios.
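As an illustration only, integrating quantitative assessments for triage could look like the following minimal sketch. Every name, score, and weight here is a hypothetical assumption for explanatory purposes; none of it reflects SemaFor's actual algorithms or interfaces.

```python
# Hypothetical sketch: fusing three quantitative assessments into a
# single review priority. Names, fields, and weights are illustrative
# assumptions, not actual SemaFor APIs.
from dataclasses import dataclass

@dataclass
class Assessment:
    asset_id: str
    detection: float        # assessed likelihood the asset was generated/manipulated
    attribution: float      # assessed likelihood it did NOT come from its purported source
    characterization: float # assessed likelihood the manipulation is malicious

def priority(a: Assessment, weights=(0.4, 0.2, 0.4)) -> float:
    """Weighted fusion of the three assessments into one priority score."""
    w_det, w_att, w_chr = weights
    return w_det * a.detection + w_att * a.attribution + w_chr * a.characterization

def triage(assessments: list[Assessment]) -> list[Assessment]:
    """Sort assets so the highest-priority items reach an analyst first."""
    return sorted(assessments, key=priority, reverse=True)
```

In such a scheme, an asset scoring high on both detection and malicious characterization would rise to the top of an analyst's queue, while benign edits (e.g., entertainment content) would fall toward the bottom.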
More information about the program is available in the Broad Agency Announcement posted on FedBizOpps.gov: https://go.usa.gov/xVkNd
# # #
Media with inquiries should contact DARPA Public Affairs at firstname.lastname@example.org
Associated images posted on www.darpa.mil and video posted at www.youtube.com/darpatv may be reused according to the terms of the DARPA User Agreement, available here: http://go.usa.gov/cuTXR.