Today, the dominant industry approach to artificial intelligence (AI) is to tack specialized automated reasoning (AR) components onto a large language model (LLM) or other machine learning (ML) system. These ML-centric systems typically offer weak assurance; the “tack-on” approach is a fundamentally limited way of providing assurance or safeguards.
The Compositional Learning-and-Reasoning for AI Complex Systems Engineering (CLARA) fundamental research program is designed to tightly integrate AR and ML components to create high-assurance AI that is expected to scale even to complex systems of systems. Integrating these two branches of AI will combine the speed and flexibility of ML with verifiability grounded in AR proofs that offer strong logical explainability and computational tractability.
In more detail, CLARA is anticipated to create powerful methods for the hierarchical, fine-grained, highly transparent composition of key kinds of ML and AR components, including Bayesian networks, neural networks, and logic programs.
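To make the composition idea concrete, the following is a minimal, purely illustrative sketch (not from the CLARA program itself) of one such pairing: a toy ML-style classifier whose outputs are checked against declarative, rule-based safeguards playing the role of an AR component. All names and rules here are hypothetical assumptions for illustration.

```python
# Hypothetical sketch: an ML component (learned classifier stand-in)
# composed with an AR component (declarative rules that verify its output).

def ml_classify(features):
    """Toy stand-in for a learned classifier: returns (label, confidence)."""
    score = sum(features) / len(features)
    return ("threat" if score > 0.5 else "benign", score)

# AR side: explicit, inspectable rules acting as verifiable safeguards.
# Each rule is a named predicate over (label, confidence, features).
RULES = [
    ("min_confidence", lambda label, conf, f: conf >= 0.6 or label == "benign"),
    ("sensor_agreement", lambda label, conf, f: label != "threat" or max(f) > 0.7),
]

def verified_classify(features):
    """Accept the ML output only if every rule holds; otherwise flag it."""
    label, conf = ml_classify(features)
    violations = [name for name, rule in RULES
                  if not rule(label, conf, features)]
    if violations:
        return ("needs_review", violations)
    return (label, [])
```

Because the rules are explicit data rather than learned weights, each accept/reject decision carries a human-readable justification (the list of violated rules), which is the kind of transparency the ML/AR composition is meant to provide.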
CLARA aims to create a theory-driven, algorithmic, highly reusable, and scalable foundation that delivers high assurance together with broad applicability across many crucial defense and commercial realms, including, but not limited to: kill webs, supply chain & logistics, and wargaming; autonomous systems and command & control; medical, financial, and legal domains; and science and technology design.