"Adversarial AI Attacks Highlight Fundamental Security Issues"

Artificial Intelligence (AI) and Machine Learning (ML) systems trained on real-world data are increasingly seen as vulnerable to attacks that use unexpected inputs to fool them. For example, contestants at the recent Machine Learning Security Evasion Competition (MLSEC 2022) successfully modified celebrity photos so that they were recognized as different people while minimizing obvious changes to the original images. The most common methods included merging two images, similar to a deepfake, and inserting a smaller image inside the original's frame.

In another case, researchers from MIT, the University of California at Berkeley, and FAR AI discovered that a professional-level Go AI could be easily defeated with moves that convinced the machine the game had ended. While the Go AI could easily defeat a professional or amateur Go player using a logical sequence of moves, an adversarial attacker could defeat the machine by making decisions that no rational player would typically make. According to Adam Gleave, a doctoral candidate in AI at the University of California, Berkeley, and one of the primary authors of the Go AI paper, AI technology may work at superhuman levels and even be extensively tested in real-life scenarios, yet remain vulnerable to unexpected inputs.

Systems trained on real-world data and scenarios to handle real-world situations may behave erratically and insecurely when presented with anomalous or malicious inputs, and the issue spans applications and systems. According to Gary McGraw, a cybersecurity expert and co-founder of the Berryville Institute of Machine Learning (BIML), a self-driving car could handle nearly every situation a normal driver might encounter on the road, but would act catastrophically during an anomalous event or one caused by an attacker.
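The two photo-manipulation tactics described above, merging two images deepfake-style and embedding a smaller image inside the original's frame, can be sketched in a few lines of NumPy. This is an illustrative toy on synthetic arrays, not the contestants' actual MLSEC code; `blend_images` and `insert_patch` are hypothetical helpers.

```python
import numpy as np

def blend_images(source, target, alpha=0.75):
    """Alpha-blend a target image into a source image (deepfake-style merge).

    A high alpha keeps the result visually close to `source` while pulling
    pixel statistics toward `target`. Illustrative only.
    """
    return (alpha * source + (1 - alpha) * target).astype(source.dtype)

def insert_patch(source, patch, top=0, left=0):
    """Paste a smaller image inside the original's frame."""
    out = source.copy()
    h, w = patch.shape[:2]
    out[top:top + h, left:left + w] = patch
    return out

# Toy 8x8 grayscale "images": a bright source and a dark target.
src = np.full((8, 8), 200, dtype=np.uint8)
tgt = np.full((8, 8), 40, dtype=np.uint8)

merged = blend_images(src, tgt)                     # mostly src, some tgt
patched = insert_patch(src, tgt[:4, :4], top=2, left=2)
```

Real entries worked against a face-recognition model's similarity score rather than raw pixels, but the operations themselves are this simple.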
The real challenge of ML, McGraw adds, is figuring out how to be flexible and do things as they normally should be done while also reacting correctly when an anomalous event occurs. Because few ML model and AI system developers focus on adversarial attacks or use red teams to test their designs, finding ways to make AI/ML systems fail is relatively simple. MITRE, Microsoft, and other organizations have called on businesses to take adversarial AI attacks more seriously, describing current attacks in the Adversarial Threat Landscape for Artificial Intelligence Systems (ATLAS) knowledge base and noting that AI research has skyrocketed, often with no robustness or security built in. This article continues to discuss the security issues highlighted by adversarial AI attacks.
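The claim that causing AI/ML systems to fail is relatively simple is easy to reproduce on a toy model. The sketch below applies a fast-gradient-sign-style perturbation (a standard evasion technique, not one the article names) to a hand-built linear classifier; the weights and inputs are illustrative, not drawn from any system discussed above.

```python
import numpy as np

# Toy linear "model": classify x as positive if w . x > 0.
w = np.array([0.5, -0.3, 0.8])

def predict(x):
    return int(w @ x > 0)

# A clean input the model classifies confidently (score = 1.3).
x = np.array([1.0, 0.0, 1.0])

# For a linear model, the gradient of the score with respect to x is just w,
# so an FGSM-style attack steps each feature against the sign of the gradient.
eps = 0.9
x_adv = x - eps * np.sign(w)

print(predict(x))      # 1
print(predict(x_adv))  # 0 -- the bounded perturbation flips the label
```

The same one-step idea, scaled to millions of pixels where each per-pixel change is imperceptible, is what makes untested models so easy to evade.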

Dark Reading reports "Adversarial AI Attacks Highlight Fundamental Security Issues"