SemaFor: Semantic Forensics


Program Summary

Media generation and manipulation technologies are advancing rapidly, and purely statistical detection methods are quickly becoming insufficient for identifying falsified media assets. Detection techniques that rely on statistical fingerprints can often be fooled with limited additional resources (algorithm development, data, or compute). However, existing automated media generation and manipulation algorithms are heavily reliant on purely data-driven approaches and are prone to making semantic errors. For example, generative adversarial network (GAN)-generated faces may have semantic inconsistencies such as mismatched earrings. These semantic failures provide an opportunity for defenders to gain an asymmetric advantage. A comprehensive suite of semantic inconsistency detectors would dramatically increase the burden on media falsifiers, requiring the creators of falsified media to get every semantic detail correct, while defenders only need to find one, or a very few, inconsistencies.
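The asymmetry described above can be illustrated with a small sketch (not SemaFor code; the detectors and metadata fields below are hypothetical): an asset is flagged if any one of a suite of semantic inconsistency detectors fires, so a falsifier must pass every check while a defender needs only one to succeed.

```python
from typing import Callable, Dict, List

# A detector returns True if it finds a semantic inconsistency.
Detector = Callable[[Dict], bool]

def flag_asset(asset: Dict, detectors: List[Detector]) -> List[str]:
    """Run every detector; return the names of those that fired."""
    return [d.__name__ for d in detectors if d(asset)]

# Hypothetical detectors over toy metadata for a generated face image.
def mismatched_earrings(asset: Dict) -> bool:
    return asset.get("left_earring") != asset.get("right_earring")

def inconsistent_shadows(asset: Dict) -> bool:
    return asset.get("light_direction") != asset.get("shadow_direction")

suspect = {"left_earring": "hoop", "right_earring": "stud",
           "light_direction": "left", "shadow_direction": "left"}

# A single inconsistency is enough to flag the asset as falsified.
hits = flag_asset(suspect, [mismatched_earrings, inconsistent_shadows])
```

Adding more detectors only helps the defender: each new check is another semantic detail the falsifier must get exactly right.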

The Semantic Forensics (SemaFor) program seeks to develop innovative semantic technologies for analyzing media. These technologies include semantic detection algorithms, which will determine if multi-modal media assets have been generated or manipulated. Attribution algorithms will infer if multi-modal media originates from a particular organization or individual. Characterization algorithms will reason about whether multi-modal media was generated or manipulated for malicious purposes. These SemaFor technologies will help detect, attribute, and characterize adversary disinformation campaigns.

To support SemaFor technology transition, DARPA launched two new efforts to help the broader community continue the momentum of defense against manipulated media.

The first comprises an analytic catalog containing open-source resources developed under SemaFor for use by researchers and industry. As capabilities mature and become available, they will be added to this repository.

The second comprises an open community research effort called AI Forensics Open Research Challenge Evaluation (AI FORCE), which aims to develop innovative and robust machine learning or deep learning models that can accurately detect synthetic AI-generated images. Through a series of mini challenges, AI FORCE asks participants to build models that can distinguish authentic images, including ones that may have been manipulated or edited using non-AI methods, from fully synthetic AI-generated images.
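The detection task framed above is, at its core, binary classification. The following is a minimal sketch (illustrative only, not an AI FORCE baseline): a tiny logistic-regression model separating "authentic" from "synthetic" samples, each reduced to a single hypothetical artifact-score feature. Real challenge entries would use deep models over full images.

```python
import math
import random

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def train(xs, ys, lr=0.5, epochs=200):
    """Fit weight w and bias b by stochastic gradient descent on log loss."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = sigmoid(w * x + b)
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

random.seed(0)
# Toy data: synthetic images (label 1) score high on the artifact feature.
authentic = [random.gauss(0.2, 0.05) for _ in range(50)]
synthetic = [random.gauss(0.8, 0.05) for _ in range(50)]
xs, ys = authentic + synthetic, [0] * 50 + [1] * 50
w, b = train(xs, ys)

def predict(x: float) -> int:
    """Return 1 if the model calls the sample synthetic, else 0."""
    return int(sigmoid(w * x + b) >= 0.5)
```

The interesting research question, which AI FORCE probes through its mini challenges, is finding features and architectures whose decision boundaries stay robust as generators improve.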

Visit SemanticForensics.com for additional details.
