Analyzing the Current State of AI Use in Malware
Released on 2026-03-19 at 03:13:44 PM
Unit 42 researchers investigated how large language models (LLMs) are being used in malware creation and functionality. They examined two samples: a .NET infostealer that incorporated OpenAI's GPT-3.5-Turbo model via API calls, and a Golang-based malware dropper that leveraged an LLM to assess its execution environment. The infostealer's LLM integration was poorly implemented and non-functional, amounting to "AI theater." The dropper, by contrast, queried an LLM to evaluate whether the system was safe before deploying its payload. While both samples demonstrate active experimentation with AI in malware, they also highlight the practical challenges of implementing it effectively. The researchers anticipate further advances in AI-assisted malware creation and execution, and emphasize that defenses must evolve to counter AI-driven threats.