CSL: Cooperative Secure Learning


Program Summary

Machine Learning (ML) requires vast amounts of data, but the data sets that enrich a model are often owned by different parties and protected by privacy, security, trade-secret, or regulatory requirements. Likewise, the applied ML models (e.g., classifiers) are often owned by different parties and may be proprietary, requiring stringent protection to reduce the threat of exposing input data and modeling results. These limitations prevent organizations in the government and private sector from cooperating fully in model training and development, and thus from achieving the best performance from ML systems.

The Cooperative Secure Learning (CSL) effort aims to develop methods to protect data, models, and model outputs among a community of entities that wish to securely share their information to better inform ML model development. CSL seeks to enable multiple parties to cooperate to improve each other’s ML models while ensuring that each entity’s individual, pre-existing datasets and models remain private. This effort will focus on developing working prototypes of computational techniques for improving ML models, and on providing insights and methods that support privacy preservation and data security. Underlying algorithms will be evaluated on their accuracy and privacy as well as their computational feasibility. Possible technical approaches can draw upon cryptographic methods (e.g., secure multiparty computation, homomorphic encryption), differential privacy, and other methodologies. If successful, the CSL effort will significantly expand multi-organization ML capabilities, leading to better-informed and more robust ML models without compromising privacy.
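As a toy illustration of the cryptographic methods mentioned above (not the program’s own method), additive secret sharing, a building block of secure multiparty computation, lets several parties compute a joint sum while no single party ever sees another party’s input. The modulus and party count below are arbitrary assumptions for the sketch:

```python
import random

PRIME = 2**61 - 1  # public modulus agreed on by all parties (assumed here)

def share(value, n_parties):
    """Split an integer into n additive shares that sum to value mod PRIME.
    Any subset of fewer than n shares is uniformly random and reveals nothing."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recombine shares to recover the shared value (mod PRIME)."""
    return sum(shares) % PRIME

# Three parties each hold a private count (e.g., a training-data statistic).
inputs = [12, 7, 30]
all_shares = [share(v, 3) for v in inputs]

# Each party receives one share of every input and sums locally...
partial_sums = [sum(col) % PRIME for col in zip(*all_shares)]

# ...and only the combined total is revealed when partial sums are pooled.
total = reconstruct(partial_sums)
print(total)  # 49, the joint sum, with no individual input disclosed
```

Real deployments use vetted MPC protocols and frameworks rather than this sketch, but the principle (computing on shares so only the agreed output is revealed) is the same one CSL-style techniques build on.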

Additional information is available in the CSL Program Announcement.

Contact