Responsible Artificial Intelligence (RAI) Toolkit: DAGR and SHIELD

Source: Chief Digital and Artificial Intelligence Office

Presented: November 5, 2024 12:00 pm
Presented by: Andrew Brooks

The Responsible Artificial Intelligence (RAI) Toolkit provides a voluntary process for identifying, tracking, and improving the alignment of AI projects with RAI best practices and the U.S. Department of Defense’s (DoD’s) AI ethical principles, while capitalizing on opportunities for innovation. It is intended to enable personnel working throughout an AI system’s life cycle to assess the system’s alignment with the DoD’s AI ethical principles and to address any concerns that the assessment identifies.

The RAI Toolkit is built around the SHIELD assessment, an acronym for the six sequential activities that form the core RAI work on a project (a brief sketch of this workflow follows the list):

  1. Set Foundations – Identify the relevant RAI, ethical, legal, and policy foundations for a project, along with potential issues (statements of concern [SOCs]) and opportunities.
  2. Hone Operationalizations – Operationalize the foundations and SOCs into concrete assessments.
  3. Improve and Innovate – Leverage mitigation tools to make progress toward meeting the foundations and addressing the SOCs.
  4. Evaluate Status – Evaluate the extent to which the foundations are being met and the SOCs are being addressed.
  5. Log for Traceability – Document to ensure traceability.
  6. Detect via Continuous Monitoring – Continuously monitor the system for any degradation in performance.
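
SHIELD is a management process rather than software, but a short sketch can make the workflow concrete. The Python below is a minimal, hypothetical illustration (the class and field names are assumptions, not part of the Toolkit itself) of how a project team might track a statement of concern through the six phases while keeping a traceability log:

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum


class ShieldPhase(Enum):
    """The six sequential SHIELD activities."""
    SET_FOUNDATIONS = 1
    HONE_OPERATIONALIZATIONS = 2
    IMPROVE_AND_INNOVATE = 3
    EVALUATE_STATUS = 4
    LOG_FOR_TRACEABILITY = 5
    DETECT_VIA_CONTINUOUS_MONITORING = 6


@dataclass
class StatementOfConcern:
    """A potential RAI issue identified during Set Foundations."""
    description: str
    phase: ShieldPhase = ShieldPhase.SET_FOUNDATIONS
    history: list = field(default_factory=list)  # traceability log

    def advance(self, new_phase: ShieldPhase, note: str) -> None:
        """Move the SOC to the next SHIELD phase and log the step."""
        self.history.append((datetime.now(), self.phase, note))
        self.phase = new_phase


# Example: carry one SOC through the early SHIELD phases.
soc = StatementOfConcern("Training data may under-represent edge cases")
soc.advance(ShieldPhase.HONE_OPERATIONALIZATIONS,
            "Defined a concrete assessment metric for data coverage")
soc.advance(ShieldPhase.IMPROVE_AND_INNOVATE,
            "Applied a mitigation tool to augment the dataset")

# The accumulated history serves the Log for Traceability activity.
for timestamp, phase, note in soc.history:
    print(timestamp, phase.name, note)
```

Keeping each phase transition in an append-only history mirrors the Toolkit’s emphasis on documentation: the record of how each SOC was operationalized, mitigated, and evaluated is what makes the assessment traceable later.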

The Toolkit also includes a DoD-specific risk assessment resource – the DoD AI Guide on Risk (DAGR).
