Developing Algorithms That Make Decisions Aligned With Human Experts

Source: DARPA

March 30, 2022 | Originally published by DARPA on March 3, 2022

Military operations – from combat to medical triage to disaster relief – require complex and rapid decision-making in dynamic situations where there is often no single right answer. Two seasoned military leaders facing the same scenario on the battlefield, for example, may make different tactical decisions when faced with difficult options. As artificial intelligence (AI) systems become more advanced in teaming with humans, building appropriate human trust in the AI's ability to make sound decisions is vital. Capturing the key characteristics underlying expert human decision-making in dynamic settings, and computationally representing those characteristics in algorithmic decision-makers, may be essential to ensuring that algorithms make trustworthy choices under difficult circumstances.

DARPA announced the In the Moment (ITM) program, which seeks to quantify the alignment of algorithms with trusted human decision-makers in difficult domains where there is no agreed-upon right answer. ITM aims to evaluate and build trusted algorithmic decision-makers for mission-critical U.S. Department of Defense (DoD) operations.