MITRE and Microsoft have teamed up with other organizations to develop a framework to identify, respond to, and stop attacks on machine learning (ML) systems. Attacks on ML systems have increased markedly in recent years, yet organizations have taken few steps to secure them. The Adversarial ML Threat Matrix is an industry-focused open framework designed to help protect ML systems from attackers.
- A Microsoft survey found that most businesses (25 of the 28 surveyed) lack the right tools to secure their ML systems.
- Gartner predicts that by 2022, 30% of artificial intelligence (AI) cyberattacks will leverage training-data poisoning, AI model theft, or adversarial samples to attack ML and other AI-powered systems.
- Earlier this year, the Software Engineering Institute's CERT warned that many ML systems are vulnerable to arbitrary misclassification attacks that could compromise their confidentiality, integrity, and availability.