TL;DR

Attackers leverage AI to automate phishing, develop evasive malware, and find exploitable systems. Traditional security tools can't keep pace with these adaptive threats, which makes AI-powered security essential for detecting and responding to them effectively.

Introduction

Cybercriminals are already using AI to automate attacks, making phishing, malware development, and reconnaissance more effective at scale. AI allows attackers to craft highly targeted phishing emails, generate evasive malware, and find exploitable targets faster than traditional security tools can respond. Relying on outdated security measures leaves organizations at risk as static defenses struggle to keep up with rapidly evolving AI-driven threats. This article discusses how attackers leverage AI, where conventional defenses fall short, and why AI-powered security is the only effective response.

How Attackers Are Using AI for Cyber Offense

Examples Observed by Microsoft and OpenAI

In February 2024, Microsoft Threat Intelligence released a joint report with OpenAI citing examples of how threat actors were using large language models (LLMs) such as ChatGPT to increase their productivity. The list below pairs each observed LLM-themed tactic, which Microsoft mapped to the MITRE ATT&CK framework, with an example.

LLM-informed reconnaissance: Understanding satellite communication protocols, radar imaging technologies, and specific technical parameters.

LLM-optimized payload crafting: Creating and refining payloads for deployment in cyberattacks.

LLM-aided development: Developing tools and programs, including those with malicious intent, such as malware.

LLM-directed security feature bypass: Finding ways to circumvent security features, such as two-factor authentication, CAPTCHA, or other access controls.

LLM-enhanced scripting techniques: Generating code snippets that appear intended to support app and web development, interactions with remote servers, web scraping, executing tasks when users sign in, and sending information from a system via email.

LLM-refined operational command techniques: Utilizing LLMs for advanced commands, deeper system access, and control representative of post-compromise behavior.

LLM-enhanced anomaly detection evasion: Developing code to evade detection, learning how to disable antivirus via registry or Windows policies, and deleting files in a directory after an application has been closed.

How Attackers Use AI to Speed Up Their Operations

Attackers continue to use LLMs to speed up their operations. An October 2024 report from OpenAI details how attackers use AI to automate reconnaissance, scan for vulnerabilities, and develop malware that bypasses security measures. AI tools help attackers refine their code, debug errors, and obfuscate scripts to avoid detection.

Some groups, like SweetSpecter and CyberAv3ngers, have been observed using AI to fine-tune their attack methods and evade security tools, according to the OpenAI report. AI also speeds up the process of discovering and exploiting software vulnerabilities, allowing attackers to develop new attack techniques faster than defenders can patch them.

AI’s Role in Influence Operations & Misinformation Campaigns

AI is also being used to manipulate public opinion on a massive scale. OpenAI's research found that AI-powered botnets generate and spread misinformation across social media, news websites, and other online platforms. One example is STORM-2035, a state-backed campaign that automates social media manipulation to push false narratives and influence public perception.

AI can generate fake news articles, produce realistic deepfake videos, and automate thousands of social media accounts to amplify misleading content. These operations are becoming more sophisticated, making it increasingly difficult to tell what's real and what's artificially generated.

AI-Generated Spear Phishing at Scale

AI is making phishing attacks more convincing and harder to detect. A November 2024 Harvard University study, co-authored by security expert Bruce Schneier, found that AI-generated phishing emails perform nearly as well as expertly crafted ones, with click-through rates between 54% and 56%.

Attackers can now generate personalized phishing emails at scale, mimicking natural communication styles and tailoring messages based on publicly available information. What makes this even more concerning is that AI-driven phishing campaigns can adapt in real time: if a target doesn't engage, the AI can tweak the message and try again with a different approach. This makes phishing far more effective and traditional defenses less reliable.

Why Traditional Defenses Are No Longer Enough

AI-Powered Attacks Are Faster, More Adaptable, and Harder to Detect

Attackers use AI to speed up every stage of an attack, from scanning for vulnerabilities to executing code and evading detection. AI-driven tools allow adversaries to map out targets quickly, identify weak points, and generate new attack strategies within minutes. Unlike traditional malware, AI-generated threats don’t rely on static signatures or predefined tactics. 

Instead, they adjust in real time, making them difficult to catch with traditional security tools that depend on known threat patterns. Signature-based detection struggles to keep up because AI-generated attacks can change their behavior faster than new signatures can be created.
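
To make the problem concrete, here is a minimal Python sketch (the payload strings are harmless placeholders, not real malware): even a trivial mutation produces a completely different hash, so a signature keyed to one variant never matches the next.

```python
import hashlib

# Two functionally identical payload strings. The second variant only
# inserts a no-op, yet its hash differs completely, so a signature
# written for the first variant never fires on the second.
variant_a = b"collect(); encode(); send()"
variant_b = b"collect(); noop(); encode(); send()"

for name, payload in (("variant_a", variant_a), ("variant_b", variant_b)):
    print(name, hashlib.sha256(payload).hexdigest())
```

When malware regenerates itself on every run, defenders face an unbounded stream of such variants, which is why behavior-based detection matters more than static matching.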

AI-Generated Malware and Code Obfuscation Bypass Security

Attackers are now using AI to modify malware in real time, making it harder to detect. STORM-0817, an advanced threat group, has been observed using AI to develop polymorphic malware that continuously changes its code structure to evade security tools, according to the OpenAI report. 

This kind of malware can generate new variations of itself every time it runs, making signature-based defenses almost useless. AI is also speeding up the development of zero-day exploits, allowing attackers to create and deploy them before security researchers even know they exist. The result is a widening gap between attack speed and defenders' response time.

SOAR and Traditional Playbooks Struggle Against AI-Driven Attacks

Many organizations rely on SOAR (Security Orchestration, Automation, and Response) platforms to automate responses to security incidents. While SOAR effectively handles predictable threats, it relies on predefined workflows that struggle with highly adaptive AI-driven attacks. 

AI-generated threats don’t follow fixed patterns, making it difficult for traditional automation playbooks to recognize and respond to them quickly. In addition, security teams still need human analysts to investigate and adapt responses manually, slowing detection and increasing the risk of a breach.
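
A simplified sketch illustrates the limitation. The playbook structure below is hypothetical, not any vendor's actual schema: static alert types map to fixed actions, and anything novel falls through to manual triage.

```python
# A hypothetical, heavily simplified SOAR-style playbook: static alert
# types mapped to fixed response actions.
PLAYBOOK = {
    "known_bad_hash": "quarantine_host",
    "c2_beacon_to_blocked_domain": "isolate_and_open_ticket",
    "failed_logins_over_threshold": "force_password_reset",
}

def respond(alert_type: str) -> str:
    # Anything that does not match a predefined pattern falls through to
    # manual triage, which is exactly where adaptive AI-generated attacks land.
    return PLAYBOOK.get(alert_type, "escalate_to_human_analyst")

print(respond("known_bad_hash"))             # quarantine_host
print(respond("novel_ai_crafted_intrusion")) # escalate_to_human_analyst
```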

Why Enterprises Must Fight AI with AI

Attackers use AI to automate reconnaissance, create highly targeted phishing campaigns, and develop malware that evolves in real time. Defenders need AI-powered security to keep pace. AI SOC analysts automate security alert investigations, reduce false positives, and free security teams to focus on real threats instead of sorting through endless low-priority alerts.

They pull data from SIEMs, EDRs, and other business systems to detect unusual activity, check authentication patterns for anomalies, and analyze permissions to spot unauthorized privilege escalations. They also engage with users directly to confirm whether suspicious actions were legitimate and cross-reference threat intelligence feeds to assess real-time risks.
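
As a rough illustration of one such check, the Python sketch below flags a login from a country outside a user's historical baseline. The event schema and field names are invented for the example, not a real SIEM format.

```python
from collections import defaultdict

# Hypothetical login events; the fields are illustrative only. The last
# event comes from a country this user has never logged in from before.
events = [
    {"user": "alice", "country": "US", "hour": 9},
    {"user": "alice", "country": "US", "hour": 14},
    {"user": "alice", "country": "RO", "hour": 3},
]

# Build a per-user baseline of previously seen countries from the history.
baseline = defaultdict(set)
for event in events[:-1]:
    baseline[event["user"]].add(event["country"])

latest = events[-1]
if latest["country"] not in baseline[latest["user"]]:
    print(f"Anomalous login: {latest['user']} from {latest['country']} "
          f"at hour {latest['hour']}")
```

A production system would weigh many more signals (device, ASN, travel feasibility, MFA outcome), but the principle is the same: compare new activity against learned baselines rather than static rules.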

Security teams often struggle with the volume of alerts and the complexity of manual investigations. AI SOC analysts streamline this process by automating deep investigations that traditionally required hours of analyst effort. 

They can deobfuscate scripts, analyze process execution, and correlate attack patterns across multiple sources without human intervention. This means security teams can shift their focus from reacting to alerts to actively improving security operations and stopping real threats faster.
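
Script deobfuscation is a good example of this kind of automated step. The benign sketch below unwraps the base64-encoded UTF-16LE text used by PowerShell's -EncodedCommand flag, a common obfuscation layer, the same way an analyst or an AI agent would during an investigation.

```python
import base64

# PowerShell's -EncodedCommand flag takes base64-encoded UTF-16LE text.
# This harmless example encodes and then decodes a command to show the
# unwrapping step an investigation performs on suspicious command lines.
encoded = base64.b64encode("Write-Output 'hello'".encode("utf-16-le")).decode()
decoded = base64.b64decode(encoded).decode("utf-16-le")
print(decoded)  # Write-Output 'hello'
```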

Integrating AI SOC Analysts with Existing Security Workflows

AI SOC analysts work alongside SIEM, SOAR, and XDR solutions, filling the gap between detection and response. While SIEM collects logs and SOAR automates predefined workflows, AI SOC analysts dynamically investigate incidents, adapting their response based on contextual data fetched in real time from business and security systems. One key benefit of deploying an AI SOC analyst is a lower mean-time-to-conclusion (MTTC): the time it takes a SOC to move an alert from acknowledgment through investigation to a verdict.
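
To make MTTC concrete, here is a minimal sketch that computes it from hypothetical alert records; the timestamps and tuple layout are illustrative only.

```python
from datetime import datetime

# Hypothetical alert records: (time the alert fired, time the
# investigation reached a verdict).
alerts = [
    ("2025-01-06T09:00", "2025-01-06T09:12"),
    ("2025-01-06T10:30", "2025-01-06T11:45"),
    ("2025-01-06T13:05", "2025-01-06T13:20"),
]

FMT = "%Y-%m-%dT%H:%M"
minutes = [
    (datetime.strptime(done, FMT) - datetime.strptime(fired, FMT)).total_seconds() / 60
    for fired, done in alerts
]
print(f"MTTC: {sum(minutes) / len(minutes):.1f} minutes")  # MTTC: 34.0 minutes
```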

Because they reason through evidence rather than follow rigid playbooks, they can identify and analyze new attack techniques without constant updates. This allows security teams to reduce manual workload and improve detection accuracy without overhauling their infrastructure.

Conclusion

AI gives attackers a massive advantage by enabling scalable, adaptive, and evasive attacks. Relying on traditional security tools that follow static rules and predefined workflows is no longer enough to stop these threats. Organizations need AI-powered security solutions that investigate, adapt, and respond in real time to defend against AI-driven cybercrime. Dropzone AI SOC analysts provide an intelligent, automated approach to security operations, reducing alert fatigue, detecting advanced threats, and enabling rapid response. See how Dropzone AI can help you fight AI-driven threats with AI-powered security operations: schedule a demo today!

FAQ

How are hackers using AI in cyberattacks?

Hackers use AI to automate phishing campaigns, generate malware, and identify exploitable devices and applications with unprecedented speed and accuracy. AI allows attackers to craft convincing phishing emails, create malware that adapts to evade detection, and exploit vulnerabilities faster than security teams can respond. AI-powered disinformation campaigns are also used to manipulate social media and spread misinformation.

Can AI be used to defend against AI-powered cyber threats?

Yes, AI SOC analysts are designed to counter AI-driven attacks by automating threat detection, investigating alerts, and prioritizing real security risks in real time. AI can query and analyze vast amounts of security data faster than humans, identify attack patterns, and initiate response actions before damage occurs. AI SOC analysts also improve by learning from investigations, making security operations more efficient and adaptive.

Why is traditional cybersecurity insufficient against AI-powered threats?

Traditional security tools depend on static rules and predefined signatures, which makes them ineffective against AI-powered attacks that can rapidly evolve and avoid detection. Attackers are leveraging AI to generate new malware variants, bypass email security filters, and mimic human behaviors in phishing campaigns. Defenses relying only on rule-based automation struggle to detect AI-driven threats, making AI-powered security necessary.

How can enterprises integrate AI into their cybersecurity strategy?

Enterprises can incorporate AI SOC analysts into their security stack, allowing them to work alongside SIEM, SOAR, and XDR solutions to enhance detection and response. AI SOC analysts can automate Tier 1 security alert investigations, escalate high-risk threats to human analysts, and execute response actions through SOAR workflows. By integrating AI-driven security operations, organizations can significantly improve response times, reduce false positives, and strengthen their defense against AI-powered attacks.

Tyson Supasatit
Principal Product Marketing Manager

Tyson Supasatit is Principal Product Marketing Manager at Dropzone AI where he helps cybersecurity defenders understand what is possible with AI agents. Previously, Tyson worked at companies in the supply chain, cloud, endpoint, and network security markets. Connect with Tyson on Mastodon at https://infosec.exchange/@tsupasat