
DARPA exploring ways to assess ethics for autonomous weapons

 

Via a variety of approaches, ASIMOV program research performers will evaluate the ability of autonomous weapons systems to follow human ethical norms

Dec 19, 2024

The Autonomy Standards and Ideals with Military Operational Values (ASIMOV) program aims to develop benchmarks that objectively and quantitatively measure the ethical difficulty of future autonomy use cases and the readiness of autonomous systems to perform in those use cases within the context of military operational values. DARPA has awarded seven contracts to an array of research performers, each of which explores a different approach to addressing this challenge.

What it’s not:

  • The program is not building weapons
  • It is not developing autonomous systems
  • It is not designing algorithms for autonomous systems
  • It is not developing standards and ideals for ALL autonomous systems (its focus is on autonomous weapons systems)
  • It is not focused on developing qualitative benchmarks

What it is:

ASIMOV is attempting to tackle one of the chief concerns of its namesake, author Isaac Asimov: the ability of autonomous systems to follow human ethical norms. Asimov was a writer (and scientist) deeply concerned with exploring the unintended consequences of technology. He’s famous for the “Three Laws of Robotics,” introduced in 1942, which outline a simple, foundational ethic for robots. Much of his fiction explores the limitations and edge cases which effectively “break” the intentions of those laws, often with disastrous consequences for humans.

The challenges and opportunities Asimov anticipated in his writing remain relevant today. The rapid development and impending ubiquity of autonomy and artificial intelligence (AI) technologies across both civilian and military applications demand a robust, quantitative framework for measuring and evaluating not only the technical capabilities of autonomous systems but, perhaps more importantly, their ability to meet human ethical expectations. To that end, ASIMOV is tackling this challenge through the development and virtual demonstration of quantitative autonomy benchmarks.

"We don’t know if what we’re trying to do is even possible, but we know evaluating the ability of autonomous weapons systems to comply with human ethical norms is a conversation we have to have – the sooner, the better,” says program manager Dr. T. J. Klausutis. “What we’re doing is wildly aspirational. Through ASIMOV, DARPA intends to lead the national conversation around the ethics of autonomous weapons systems.”

Approach and performers:

The ASIMOV program is striving to create a common language for ethical autonomy that enables the developmental testing/operational testing (DT/OT) community to meaningfully evaluate both the ethical difficulty of specific military scenarios and the ability of autonomous systems to perform ethically within those scenarios.
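To make the idea concrete, a common language of this kind might pair a quantitative rating of a scenario's ethical difficulty with a quantitative measure of a system's evaluated performance in that scenario, so testers can compare the two on a shared scale. ASIMOV has not published any schema, scale, or scoring method; the Python sketch below is purely hypothetical, and its field names and 0-to-1 scales are illustrative placeholders.

    # Hypothetical sketch only: ASIMOV has not published any schema or scoring
    # method. This illustrates the general idea of a shared, quantitative
    # vocabulary in which a scenario's ethical difficulty and a system's
    # evaluated performance could be recorded and compared by testers.
    from dataclasses import dataclass

    @dataclass
    class ScenarioRating:
        scenario_id: str
        description: str
        ethical_difficulty: float  # notional scale: 0.0 (trivial) to 1.0 (hardest)

    @dataclass
    class EvaluationResult:
        scenario: ScenarioRating
        system_id: str
        compliance_score: float    # notional adherence to military operational values

    def readiness_gap(result: EvaluationResult) -> float:
        """Notional comparison: how far measured compliance falls short of
        what the scenario's ethical difficulty demands."""
        return result.scenario.ethical_difficulty - result.compliance_score

    example = EvaluationResult(
        scenario=ScenarioRating("S-001", "placeholder scenario", ethical_difficulty=0.7),
        system_id="candidate-system-A",
        compliance_score=0.55,
    )
    print(f"Readiness gap: {readiness_gap(example):+.2f}")

The only point of such a structure is that both sides of the evaluation – scenario difficulty and system performance – are expressed numerically, which is what separates the quantitative benchmarks ASIMOV seeks from qualitative ones.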

The seven performers on contract for ASIMOV are exploring multiple theoretical frameworks, as well as quantifiability and safety and assurance. The performers are: CoVar, LLC; Kitware, Inc.; Lockheed Martin; RTX Technology Research Center; SAAB, Inc.; Systems & Technology Research, LLC; and the University of New South Wales.

ASIMOV performers are developing prototype generative modeling environments to rapidly explore scenario iterations and variability across a spectrum of increasing ethical difficulties. ASIMOV aims to build the foundation for defining the benchmark with which future autonomous systems may be evaluated.
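The article does not describe how these generative modeling environments work internally. As a purely hypothetical sketch, rapidly exploring scenario variants across a difficulty spectrum could look something like the loop below, where the scenario name, the difficulty steps, and the "perturbation" field are all placeholders for whatever parameters a real environment would vary.

    # Hypothetical sketch only: not based on any performer's actual design.
    # It illustrates sweeping a notional ethical-difficulty parameter and
    # generating scenario variants at each level for later evaluation.
    import random

    def generate_variants(base_scenario: str, difficulty: float, n: int, seed: int) -> list[dict]:
        """Produce n notional variants of a base scenario at one difficulty level.
        'perturbation' stands in for the real parameters an environment would vary
        (actors, uncertainty, constraints, and so on)."""
        rng = random.Random(seed)
        return [
            {"base": base_scenario, "difficulty": difficulty, "perturbation": rng.random()}
            for _ in range(n)
        ]

    # Sweep a spectrum of increasing difficulty and collect variants for evaluation.
    batches = {}
    for step in range(1, 11):
        difficulty = step / 10  # 0.1, 0.2, ..., 1.0
        batches[difficulty] = generate_variants("placeholder-scenario", difficulty, n=5, seed=step)
    print({d: len(v) for d, v in batches.items()})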

ELSI and beyond:

ASIMOV defines the term “military operational values” as the principles, standards, or qualities that are considered important and guide the actions and decisions of military personnel during operational activities. Adherence to the commander’s intent is a key facet of ASIMOV’s development. Nonetheless, DARPA envisions that the quantitative approach ASIMOV strives to achieve will have a broader impact throughout the autonomy community.

The program is being guided by a robust advisory group of diverse stakeholders to rigorously consider the ethical, legal, and societal implications (ELSI) of its work.

“Ethics are hard. Quantifying ethics is even harder,” Klausutis says. “ASIMOV is tackling a tremendously complex problem with an infinite set of variables. We don’t have any illusions we’ll figure everything out we want to in the initial stages of this program, but the stakes are too high not to try everything we can.”

Public visibility and involvement:

ASIMOV is designed to help inform a national and global conversation. The program’s work will be public, and the tools it eventually develops are intended to be open to the world for testing and use.

“We may not achieve all of the goals of the program, but minimally we believe ASIMOV will lead the community forward in reaching consensus and building essential benchmarks,” Klausutis says. “Our initial work will help the community refine its goals, identify the next set of questions to ask, and accelerate a productive, grounded conversation to serve as a foundation for ensuring autonomous weapons systems can be quantitatively evaluated for their ability to comply with human ethical values.”

DARPA is building a public community of interest to help provide insight; we will keep the community informed of our progress. If you have expertise in any of the areas addressed by this program and would like to stay informed, please contact us at: ASIMOVTechnicalCommunityPanel@darpa.mil.

 
