How do AI-powered cyberattacks work?

AI has become a key technology in every enterprise IT toolbox — and it has also become a weapon in the arsenals of cybercriminals.

AI-powered cyberattacks leverage AI or machine learning (ML) algorithms and techniques to automate, accelerate, or enhance various phases of a cyberattack. This includes identifying vulnerabilities, deploying campaigns along identified attack vectors, advancing attack paths, establishing backdoors within systems, exfiltrating or tampering with data, and interfering with system operations.

Like other AI systems, the algorithms used in AI-powered cyberattacks can learn and evolve over time. This means AI-enabled cyberattacks can adapt to avoid detection or generate attack patterns that a security system fails to recognize.


2024 CrowdStrike Global Threat Report

The 2024 Global Threat Report unveils an alarming rise in covert activity and a cyber threat landscape dominated by stealth. Data theft, cloud breaches, and malware-free attacks are on the rise. Read about how adversaries continue to adapt despite advancements in detection technology.

Download Now

Characteristics of AI-powered cyberattacks

AI-powered cyberattacks have five main characteristics:

  • Attack automation: Until very recently, most cyberattacks required significant hands-on support from a human adversary. However, growing access to AI- and generative AI-enabled tools is allowing adversaries to automate attack research and execution.
  • Efficient data gathering: The first phase of every cyberattack is reconnaissance. During this period, cyberattackers will search for targets, exploitable vulnerabilities, and assets that could be compromised. AI can automate or accelerate much of this legwork, enabling adversaries to drastically shorten the research phase and potentially improve the accuracy and completeness of their analysis.
  • Customization: One of the key capabilities of AI is data scraping, in which information from public sources — such as social media sites and corporate websites — is gathered and analyzed. In the context of a cyberattack, this information can be used to create hyper-personalized, relevant, and timely messages that serve as the foundation for phishing attacks and other attacks that leverage social engineering techniques.
  • Reinforcement learning: AI algorithms learn and adapt in real time. In the same way that these tools continuously evolve to provide more accurate insights for corporate users, they also evolve to help adversaries improve their techniques or avoid detection.
  • Employee targeting: Similar to attack customization, AI can be used to identify individuals within an organization who are high-value targets. These are people who may have access to sensitive data or broad system access, may appear to have lower technological aptitude, or may have close relationships with other key targets.

Types of AI-powered cyberattacks

There are multiple types of cyberattacks enabled by AI and machine learning. These include:

AI-driven social engineering attacks

AI-driven social engineering attacks leverage AI algorithms to assist in the research, creative concepting, or execution of a social engineering attack. A social engineering attack is any kind of cyberattack that aims to manipulate human behavior to fulfill a purpose, such as sharing sensitive data, transferring money or ownership of high-value items, or granting access to a system, application, database, or device.

In an AI-driven social engineering attack, an algorithm can be used to do the following:

  • Identify an ideal target, including both the overall corporate target and a person within the organization who can serve as a gateway to the IT environment
  • Develop a persona and corresponding online presence to carry out communication with the attack target
  • Develop a realistic and plausible scenario that would generate attention
  • Write personalized messages or create multimedia assets, such as audio recordings or video footage, to engage the target

AI-driven phishing attacks

AI-driven phishing attacks use generative AI to create highly personalized and realistic emails, SMS messages, phone communication, or social media outreach to achieve a desired result. In most cases, the goals of these attacks are the same as those of a social engineering attack: to access sensitive information, gain access to a system, receive funds, or prompt a user to install a malicious file on their device.

In advanced cases, AI can be used to automate the real-time communication used in phishing attacks. For example, AI-powered chatbots can support interactions that make them nearly indistinguishable from humans. Attackers can use these tools, deployed at scale, to attempt to connect with countless individuals simultaneously. In many cases, these chatbots pose as customer support or service agents in an attempt to gather personal information and account credentials, reset account passwords, or access a system or device.

Deepfakes

A deepfake is an AI-generated video, image, or audio file that is meant to deceive people. Deepfakes commonly appear on the internet for no other purpose than to entertain and confuse. However, they can also be used more maliciously as part of disinformation campaigns, “fake news,” smear campaigns of high-profile individuals, or cyberattacks.

In the context of a cyberattack, a deepfake is usually part of a social engineering campaign. For example, an attacker may use existing footage of a corporate leader or client to create a doctored voice recording or video. The fabricated recording mimics the person's voice or likeness and instructs an employee to take a specific action, such as transferring funds, changing a password, or granting system access.

Adversarial AI/ML

Adversarial AI or adversarial ML is when an attacker aims to disrupt the performance or decrease the accuracy of AI/ML systems through manipulation or deliberate misinformation.

Attackers use several adversarial AI/ML techniques that target different areas of model development and operation. These include:

  • Poisoning attacks: Poisoning attacks target the AI/ML model's training data, the information from which the model learns. In a poisoning attack, the adversary may inject fake or misleading records into the training dataset to compromise the model's accuracy or objectivity.
  • Evasion attacks: Evasion attacks target an AI/ML model's input data. These attacks apply subtle changes to the data fed to the model, causing the model to misclassify it and degrading its predictive capabilities (see the sketch after this list).
  • Model tampering: Model tampering targets the parameters or structure of a pre-trained AI/ML model. In these attacks, an adversary makes unauthorized alterations to the model to compromise its ability to create accurate outputs.
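To make the evasion technique concrete, below is a minimal sketch of an evasion attack against a simple linear classifier, in the spirit of the fast gradient sign method (FGSM). The model, the synthetic dataset, and the perturbation budget are illustrative assumptions; real evasion attacks target far more complex models, but the core idea is the same: a small, carefully directed change to the input flips the model's prediction.

```python
# Minimal evasion-attack sketch (illustrative assumptions throughout):
# a small, directed perturbation flips a trained classifier's prediction.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Train a toy "victim" model on synthetic data.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
model = LogisticRegression().fit(X, y)

# Pick one sample and note the model's current prediction.
x = X[0]
pred = model.predict([x])[0]
print("Original prediction:", pred)

# For a linear model, the loss gradient w.r.t. the input is proportional
# to the weight vector, so an FGSM-style step follows sign(w), pushed
# toward the wrong class.
w = model.coef_[0]
direction = np.sign(w) if pred == 0 else -np.sign(w)

# Choose a perturbation budget just large enough to cross the boundary.
margin = abs(model.decision_function([x])[0])
epsilon = 1.1 * margin / np.abs(w).sum()
x_adv = x + epsilon * direction

print("Adversarial prediction:", model.predict([x_adv])[0])
print("Max per-feature change:", float(np.abs(x_adv - x).max()))
```

Note how small the per-feature change is relative to the data: the input looks essentially unchanged to a human reviewer, yet the model's output flips.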

Malicious GPTs

A generative pre-trained transformer (GPT) is a type of AI model that can produce human-like text in response to user prompts. A malicious GPT refers to an altered version of a GPT that produces harmful or deliberately misleading outputs.

In the context of cyberattacks, a malicious GPT can generate attack vectors (such as malware) or supporting attack materials (such as fraudulent emails or fake online content) to advance an attack.

Ransomware attacks

AI-enabled ransomware is a type of ransomware that leverages AI to improve its performance or automate some aspects of the attack path.

For example, AI can be leveraged to research targets, identify system vulnerabilities, or encrypt data. AI can also be used to adapt and modify the ransomware files over time, making them more difficult to detect with cybersecurity tools.


2023 Threat Hunting Report

In the 2023 Threat Hunting Report, CrowdStrike’s Counter Adversary Operations team exposes the latest adversary tradecraft and provides knowledge and insights to help stop breaches. 

Download Now

How to mitigate AI-powered cyberattacks

AI technology makes it potentially easier and faster for cybercriminals to carry out cyberattacks, effectively lowering the barrier to entry for some actors and increasing the level of sophistication of established players. AI-powered attacks are often more difficult to detect and prevent than attacks that use traditional techniques and manual processes, making them a significant security threat to all companies.

In this section, we’ll offer recommendations across four key categories to protect and defend against AI-powered cyberattacks.

Continuously conduct security assessments

  • Deploy a comprehensive cybersecurity platform that offers continuous monitoring, intrusion detection, and endpoint protection.
  • Develop baselines for system activity and user behavior to serve as a standard of comparison for future activity, and establish user and entity behavior analytics (UEBA); a minimal baselining sketch follows this list. Ideally, this should be integrated with other environmental activity, such as endpoint and cloud environment activity.
  • Analyze systems for abnormal user activity or unexpected changes within the environment that may indicate an attack.
  • Implement real-time analysis of input and output data for your AI/ML system to protect against adversarial AI attacks.
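As a concrete illustration of the baselining recommendation above, the sketch below flags a user's activity when it deviates sharply from that user's own historical pattern, using a simple z-score. The users, event counts, and alert threshold are illustrative assumptions; production UEBA systems model many more signals (geolocation, device, time of day, peer-group behavior) with far richer statistics.

```python
# Minimal UEBA-style baselining sketch (illustrative assumptions):
# flag a user's daily login count when it deviates sharply from
# that user's own historical baseline.
import statistics

# Historical daily login counts per user (synthetic example data).
baseline = {
    "alice": [4, 5, 3, 6, 4, 5, 4, 5, 6, 4],
    "bob":   [1, 0, 2, 1, 1, 0, 1, 2, 1, 1],
}

Z_THRESHOLD = 3.0  # assumed alerting threshold; tune per environment

def is_anomalous(user: str, todays_count: int) -> bool:
    """Return True if today's activity is a statistical outlier
    relative to this user's established baseline."""
    history = baseline[user]
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1e-9  # avoid division by zero
    z = (todays_count - mean) / stdev
    return abs(z) > Z_THRESHOLD

print(is_anomalous("alice", 5))   # False: within normal range
print(is_anomalous("bob", 40))    # True: likely credential abuse or automation
```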

Develop an incident response plan

An incident response plan is a document that outlines an organization's procedures, steps, and responsibilities in the event of a cyberattack. This plan is based on four key phases, as defined by the National Institute of Standards and Technology (NIST):

  • Preparation: Develop a plan to help prevent security events and respond to attacks when they occur.
  • Detection and analysis: Confirm if a security event has occurred and, if one has, determine its severity and type.
  • Containment, eradication, and recovery: Restrict system use and operation to limit the spread of the attack and its impact; execute remediation tactics to eliminate the threat, restore normal operations, and patch any vulnerabilities.
  • Post-incident activity: Review lessons learned and coordinate the implementation of additional security measures to prevent similar attacks in the future and safeguard against a wider range of threats.

Conduct employee awareness training

  • Add a module to the existing security training course that focuses specifically on AI-powered attacks.
  • Focus on how realistic and convincing AI-enabled attack techniques can be, particularly social engineering techniques and deepfake-based chat and audio attacks.
  • To protect against adversarial AI attacks, train teams to recognize suspicious activity or outputs related to AI/ML-based systems.

Implement AI-powered solutions

Just as AI can be weaponized by cybercriminals, organizations can use it to counter AI-based attacks.

  • Adopt AI-native cybersecurity that enables organizations to leverage this technology to analyze vast datasets and identify patterns.
  • Leverage AI-enabled tools to automate security-related tasks, including monitoring, analysis, patching, prevention, and remediation.
  • Develop system parameters that alert teams to high-risk activity and help them prioritize responses (a minimal risk-scoring sketch follows this list).
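As a minimal sketch of the alerting recommendation above, the example below scores events against a set of weighted risk signals and surfaces only the highest-risk ones first. The signal names, weights, and threshold are all illustrative assumptions, not any vendor's actual scoring model.

```python
# Minimal alert risk-scoring sketch (all weights and signals are
# illustrative assumptions, not a real product's scoring model).
from dataclasses import dataclass

# Assumed weights per risk signal; a real system would tune these
# from telemetry and threat intelligence.
SIGNAL_WEIGHTS = {
    "off_hours_login": 20,
    "new_device": 15,
    "privilege_escalation": 40,
    "mass_file_access": 35,
    "known_bad_ip": 50,
}

ALERT_THRESHOLD = 60  # assumed; activity above this is surfaced to the team

@dataclass
class Event:
    user: str
    signals: list[str]

def risk_score(event: Event) -> int:
    """Sum the weights of the risk signals present on this event."""
    return sum(SIGNAL_WEIGHTS.get(s, 0) for s in event.signals)

events = [
    Event("alice", ["off_hours_login"]),
    Event("bob", ["new_device", "privilege_escalation", "mass_file_access"]),
]

# Prioritize: highest-risk events first, alert only above the threshold.
for e in sorted(events, key=risk_score, reverse=True):
    score = risk_score(e)
    if score >= ALERT_THRESHOLD:
        print(f"ALERT ({score}): {e.user} -> {e.signals}")
```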

CrowdStrike’s AI-native platform

The CrowdStrike Falcon® platform is cybersecurity’s AI-native platform for the extended detection and response (XDR) era.

The Falcon platform drives the convergence of data, security, and IT with generative AI and workflow automation built in natively to stop breaches, reduce complexity, and lower costs.


Want to learn how your organization can harness the power of AI to defend against the most sophisticated cyberattacks? Contact us now to learn more about the CrowdStrike Falcon platform and schedule a demo.

Schedule Demo

Lucia Stanham is a product marketing manager at CrowdStrike with a focus on endpoint protection (EDR/XDR) and AI in cybersecurity. She has been at CrowdStrike since June 2022.