RED: Reverse Engineering of Deceptions

Program Summary

Machine learning (ML) techniques are susceptible to adversarial deception both at training time and after deployment. Similarly, humans are susceptible to deception by falsified media (images, video, audio, text) or other information created with malicious intent. In both cases the consequences can be significant, and deception plays an increasingly central role in information-based attacks. Yet the tools by which such attacks are carried out, and the adversaries behind them, are often unclear; recovering the tools and processes used to create an attack provides information that may aid in identifying the adversary. The Reverse Engineering of Deceptions (RED) program aims to develop techniques that automatically reverse engineer the toolchains behind attacks such as multimedia falsification, adversarial ML attacks, and other information deception attacks, supporting both the automated identification of attack toolchains and the development and maintenance of scalable databases of those toolchains.
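The summary is abstract, so a toy sketch may help fix ideas. The Python below shows one way toolchain attribution could work in principle: reduce an observed adversarial perturbation to a small feature signature and match it against a database of signatures for known attack tools. Every name and number here (the signature features, the TOOLCHAIN_DB entries, the nearest-neighbor rule) is a hypothetical stand-in for illustration, not the RED program's actual methods.

    import numpy as np

    # Hypothetical toolchain "signatures": summary statistics of the
    # perturbations each known attack tool tends to produce. In a real
    # system these would be learned features, not hand-set numbers.
    TOOLCHAIN_DB = {
        # (mean |perturbation|, fraction of pixels touched)
        "fgsm-like": np.array([0.031, 0.98]),
        "sparse-l0": np.array([0.210, 0.02]),
        "patch":     np.array([0.450, 0.06]),
    }

    def signature(perturbation):
        """Reduce an observed perturbation to the same feature space."""
        magnitude = np.abs(perturbation)
        return np.array([magnitude.mean(), (magnitude > 1e-3).mean()])

    def attribute(perturbation):
        """Return the known toolchain whose signature is nearest."""
        sig = signature(perturbation)
        return min(TOOLCHAIN_DB,
                   key=lambda k: np.linalg.norm(TOOLCHAIN_DB[k] - sig))

    # Example: a dense, low-magnitude perturbation resembles an
    # FGSM-style tool rather than a sparse or patch-based one.
    rng = np.random.default_rng(0)
    observed = 0.03 * rng.choice([-1.0, 1.0], size=(32, 32))
    print(attribute(observed))  # -> "fgsm-like"

A real system would replace the hand-set statistics with learned features and the nearest-neighbor lookup with a scalable index, but the shape of the problem (fingerprint the attack, maintain a database of toolchains, match against it) is the one the summary describes.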

For more information, see the RED Program Announcement.
