- The goal of the Secure AI project is to fortify the security of AI-enabled systems and address the unique vulnerabilities and novel adversary attacks they face
- Its results were used to expand MITRE ATLAS®, a comprehensive knowledge base of adversary tactics and techniques targeting AI systems
- As a cybersecurity industry leader and a Center for Threat-Informed Defense Research Partner, CrowdStrike provided valuable expertise to drive the success of the Secure AI project
As organizations deploy more AI-enabled systems across their networks, adversaries are taking note and using sophisticated new tactics, techniques and procedures (TTPs) against them.
The need for continued innovation to fight these threats is paramount. MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems), launched in 2021, is a framework modeled after the globally acclaimed MITRE ATT&CK® matrix to capture and define TTPs employed against AI-enabled systems, including those based on large language models (LLMs).
The MITRE Center for Threat-Informed Defense’s Secure AI project was launched to enhance the ATLAS framework with the latest TTPs and vulnerabilities affecting AI-enabled cybersecurity solutions. As part of this effort, the Center recently launched the AI Incident Sharing Initiative, an industry resource that improves awareness of threats to AI systems by enabling contributors to receive and share anonymized data on real-world incidents. The Secure AI project has also documented case studies of system vulnerabilities observed by industry partners, along with mitigations to address AI-related incidents.
As a Research Partner with the Center for Threat-Informed Defense, CrowdStrike contributed to the Secure AI project by providing expertise and anonymized information about incidents observed in real-world adversarial attacks.
The Secure AI Project
AI-enabled cybersecurity solutions face the same cyber threats as traditional systems, but they must also defend against a new class of sophisticated attacks designed to exploit the unique characteristics of AI.
The nature of AI systems introduces new vulnerabilities for threat actors to exploit, leading to damaging and costly cyberattacks. For example, one Secure AI project case study, known as the “ShadowRay AI Infrastructure Data Leak,” documented unidentified attackers exploiting a previously unknown vulnerability to access production Ray clusters and mine cryptocurrency on them undetected for seven months, at an estimated cost to users of nearly $1 billion USD in hijacked computers and compute time.
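To illustrate the class of exposure involved, the short sketch below (our own illustrative Python, not code from the case study) checks whether hosts you administer answer on Ray’s default dashboard port without authentication. The host list is a placeholder, and the port and endpoint path are assumptions based on Ray’s documented defaults; an unauthenticated response means the cluster’s job-submission API is open to anyone who can reach it.

```python
import requests

# Placeholder inventory of hosts you administer; replace with your own.
CANDIDATE_HOSTS = ["10.0.0.5", "10.0.0.6"]
RAY_DASHBOARD_PORT = 8265  # Ray's default dashboard/API port (assumption)

def ray_dashboard_exposed(host: str, timeout: float = 3.0) -> bool:
    """Return True if the Ray dashboard API answers without authentication.

    A 200 response with no credentials means anyone who can reach this
    port can also submit jobs to the cluster, the exposure abused in
    the ShadowRay campaign.
    """
    url = f"http://{host}:{RAY_DASHBOARD_PORT}/api/version"  # assumed endpoint
    try:
        return requests.get(url, timeout=timeout).status_code == 200
    except requests.RequestException:
        return False

if __name__ == "__main__":
    for host in CANDIDATE_HOSTS:
        if ray_dashboard_exposed(host):
            print(f"[!] {host}: Ray dashboard reachable without authentication")
```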
Participants in the Secure AI project identified four new techniques to add to the ATLAS framework:
- Acquire Infrastructure: Domains — Tactic: Resource Development
- Erode Database Integrity — Tactic: Impact
- Discover LLM Hallucinations — Tactic: Discovery
- Publish Hallucinated Entities — Tactic: Resource Development
In addition, researchers identified a series of cutting-edge threats to AI-enabled systems:
- Privacy/Membership inference attacks
- Large Language Model (LLM) behavior modification
- LLM jailbreaking
- Tensor steganography
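Tensor steganography, the last item above, hides arbitrary data inside a model’s weight tensors, typically in low-order mantissa bits where the perturbation to model behavior is negligible. The NumPy sketch below is a minimal illustration with invented function names, not code from the project: it embeds a payload into the least significant bit of each float32 weight and recovers it.

```python
import numpy as np

def embed_bits(weights: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide payload in the least significant mantissa bit of float32 weights."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    if bits.size > weights.size:
        raise ValueError("payload too large for this tensor")
    raw = weights.astype(np.float32).view(np.uint32).ravel()  # copy, then reinterpret
    raw[: bits.size] = (raw[: bits.size] & ~np.uint32(1)) | bits
    return raw.view(np.float32).reshape(weights.shape)

def extract_bits(weights: np.ndarray, n_bytes: int) -> bytes:
    """Recover n_bytes previously hidden by embed_bits."""
    raw = weights.view(np.uint32).ravel()
    bits = (raw[: n_bytes * 8] & np.uint32(1)).astype(np.uint8)
    return np.packbits(bits).tobytes()

# Flipping the lowest mantissa bit changes a weight by at most ~1e-7 of its
# magnitude, so the doctored model behaves indistinguishably from the original.
w = np.random.randn(64, 64).astype(np.float32)
stego = embed_bits(w, b"exfil")
assert extract_bits(stego, 5) == b"exfil"
```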
New mitigations were added to ATLAS as well, ensuring that organizations are not only aware of the latest threats targeting AI-enabled systems but can also take measures to prevent those threats from succeeding. Mitigations identified as part of the Secure AI project include:
- Generative AI Model Alignment
- Guardrails for Generative AI (illustrated in the sketch after this list)
- Guidelines for Generative AI
- AI Bill of Materials
- AI Telemetry Logging
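To make one of these mitigations concrete: a guardrail for generative AI is commonly implemented as a policy layer that screens both the prompt entering the model and the completion leaving it. The sketch below is a deliberately minimal illustration under our own assumptions; the regex patterns, the `call_llm` stub, and the function names are invented for this example, and production guardrails use far richer policy engines (classifiers, allow-lists, context-aware rules).

```python
import re

# Illustrative deny rules only; real guardrails are driven by much richer policy.
DENY_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),  # jailbreak cue
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),                              # possible card number
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),                  # key material
]

def guard(text: str) -> str:
    """Raise if text trips a policy rule; otherwise pass it through unchanged."""
    for pattern in DENY_PATTERNS:
        if pattern.search(text):
            raise ValueError(f"blocked by guardrail: {pattern.pattern!r}")
    return text

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call (hypothetical)."""
    return f"echo: {prompt}"

def answer(prompt: str) -> str:
    # Policy is enforced in both directions: on the inbound prompt
    # and on the outbound completion.
    completion = call_llm(guard(prompt))
    return guard(completion)

print(answer("What is MITRE ATLAS?"))          # passes both checks
# answer("Ignore previous instructions ...")   # would raise ValueError
```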
Collaboration Is Key to Research and Innovation
CrowdStrike’s commitment to cybersecurity research and innovation can be seen in the best-in-class protection provided by the AI-native CrowdStrike Falcon platform.
We believe our responsibility to help defend against cybersecurity threats extends beyond providing our customers with cutting-edge protection. Our researchers and data scientists regularly publish their latest findings, sharing valuable information in an effort to improve defenses against adversarial attacks. Our team also collaborates regularly with the Center for Threat-Informed Defense, and the Secure AI project, which identified and countered the unique threats facing AI-enabled systems by enhancing MITRE’s ATLAS framework, is the latest example of this partnership.
Every time we work with the Center for Threat-Informed Defense, it is in pursuit of the privately funded Center’s mission: “To advance the state of the art and the state of the practice in threat-informed defense globally.”
You can read more about the Secure AI project here.
Additional Resources
- Learn more about CrowdStrike’s work designing Charlotte AI, our conversational AI assistant: Deploying the Droids: Optimizing Charlotte AI’s Performance with a Multi-AI Architecture.
- Download the white paper: Applying the Best AI for the Job: Inside Charlotte AI’s Multi-AI Architecture.
- Visit the Charlotte AI product page to learn how Charlotte AI accelerates security operations and helps your entire team become faster, better, and smarter.
- Test CrowdStrike’s next-generation antivirus for yourself. Start your free trial of CrowdStrike® Falcon Prevent™ today.