DARPA Open Sources Resources to Aid Evaluation of Adversarial AI Defenses


Source: DARPA

January 4, 2022 | Originally published by DARPA on December 21, 2021

Existing machine-learning (ML) models harbor inherent weaknesses that open the technology to spoofing, corruption, and other forms of deception. Attacks on artificial intelligence (AI) algorithms could produce a range of negative effects, from altering a content recommendation engine to disrupting the operation of a self-driving vehicle. As ML models become increasingly integrated into critical infrastructure and systems, these vulnerabilities grow even more worrisome. DARPA’s Guaranteeing AI Robustness against Deception (GARD) program aims to get ahead of this safety challenge by developing a new generation of defenses against adversarial attacks on ML models.
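
To make the notion of an adversarial attack concrete, the sketch below implements the fast gradient sign method (FGSM), one of the simplest and best-known attacks on image classifiers. It is an illustrative example only, not drawn from the GARD program's resources; the model, inputs, and perturbation budget are hypothetical placeholders.

```python
# Illustrative sketch (not from the GARD toolkit): the fast gradient
# sign method (FGSM) perturbs an input in the direction of the loss
# gradient so that a trained classifier misclassifies it.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.03):
    """Return an adversarial copy of x within an L-infinity budget epsilon.
    `model`, `x`, `label`, and `epsilon` are placeholder names."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)  # loss against the true label
    loss.backward()                          # gradient w.r.t. the input
    x_adv = x + epsilon * x.grad.sign()      # step that increases the loss
    return x_adv.clamp(0.0, 1.0).detach()    # keep pixels in a valid range
```

Perturbations of this kind are often imperceptible to a human observer, which is why defenses like those GARD pursues must be evaluated against such attacks systematically.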