ARCOS: Automated Rapid Certification Of Software

 

Program Summary

The process of determining that a software system’s risk is acceptable is referred to as “certification.” Current certification practices within the Department of Defense (DoD) are antiquated and unable to scale with the amount of software deployed. Two factors prevent scaling: (a) the use of human evaluators to determine if the system meets certification criteria, and (b) the lack of a principled means to decompose evaluations.

The amount of assurance evidence needed to determine a software system's conformance to certification criteria can be overwhelming to human subject matter experts, resulting in superficial, incomplete, and/or unacceptably long evaluations. Human evaluators also have unique expertise, experience, and biases that influence their approach to evaluations. Because certification requirements may be vague or poorly written, evaluators often must interpret what is intended. Combined, these factors result in inconsistencies over time and across evaluations.

Currently, there is no means to compose evaluations in a principled and trustworthy manner. Composed evaluations would allow subsystems or components to be evaluated independently. The results of those independent evaluations could then be leveraged as assurance evidence in the composed systems. This would amortize the effort of evaluating any component over all systems using that component. Current practice requires re-evaluating components and their assurance evidence in every system that employs them. The inability to use a divide-and-conquer approach to certification of large systems increases the cost and time required to perform these certifications.

The goal of the Automated Rapid Certification Of Software (ARCOS) program is to automate the evaluation of software assurance evidence to enable certifiers to determine rapidly that system risk is acceptable. Two factors support the acceleration of software certification through the automation of evaluations. First, the DoD has articulated its intentions to have its contractors modernize their engineering processes in the DoD Digital Engineering Strategy. The goal of this strategy is to move away from document-based engineering processes and towards design models that are to be the authoritative source of truth for systems. Such a future does not lend itself to current certification practices, but it will facilitate the automated evaluation of assurance. Second, advances in several technologies provide a basis for confidence that automated evaluation of assurance evidence to support certification is possible. Model-based design technology, including probabilistic model checking, may enable reasoning over a design in a way that quantifies uncertainty. So-called "Big Code" analytics have pioneered the application of semantic-based analytics to software and its associated artifacts. Mathematically rigorous analysis and verification provide the ability to develop software implementations that are demonstrably correct and sound. Assurance case languages provide a means of expressing arguments about how software fulfills its certification goals in a machine-readable manner.
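As a rough illustration of what a machine-readable assurance case enables, the sketch below models an argument as goals supported by evidence or sub-goals, and mechanically flags unsupported claims. The structure and names are assumptions loosely inspired by goal-structuring notations, not an ARCOS artifact.

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    # A claim in the assurance argument, supported either by direct
    # evidence or by a decomposition into sub-goals.
    claim: str
    evidence: list[str] = field(default_factory=list)
    subgoals: list["Goal"] = field(default_factory=list)

def unsupported(goal: Goal) -> list[str]:
    """Return claims that have neither evidence nor any sub-goals."""
    if goal.evidence:
        return []
    if not goal.subgoals:
        return [goal.claim]
    missing: list[str] = []
    for sub in goal.subgoals:
        missing.extend(unsupported(sub))
    return missing

# Hypothetical example case: one leaf goal is still missing evidence.
case = Goal(
    claim="System risk is acceptable",
    subgoals=[
        Goal("Memory safety verified", evidence=["static-analysis-report"]),
        Goal("Requirements traced to tests"),  # no evidence yet
    ],
)
print(unsupported(case))  # prints ['Requirements traced to tests']
```

Because the case is data rather than a document, checks like this one can run automatically whenever evidence is added or revised.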

ARCOS will explore techniques for automating the evidence generation process for new and legacy software; create a means of curating evidence while maintaining its provenance; and develop technologies for the automated construction of assurance cases, as well as technologies that can validate and assess the confidence of an assurance case argument. The evidence generation, curation, and assessment technologies will form the ARCOS tools and processes, working collectively to provide a scalable means of accelerating the pathway to certification.

For more information, please visit the Automated Rapid Certification of Software (ARCOS) Proposers Day webpage.

A video of the ARCOS Proposers Day presentations is also available.

 
