
LLM-Powered MalTerminal Malware Uses OpenAI GPT-4 to Create Ransomware Code


By Mayura Kathir

LLM-enabled malware poses new challenges for detection and threat hunting because its malicious logic can be generated at runtime rather than embedded in the binary.

Our research uncovered hitherto unknown samples, including what may be the earliest known example of LLM-enabled malware, which we dubbed "MalTerminal."

Our methodology also uncovered other offensive LLM applications, including people-search agents, red team benchmarking utilities, and LLM-assisted code vulnerability injection tools.

As Large Language Models (LLMs) become integral to development workflows, adversaries are adapting these systems to dynamically produce malicious payloads.

SentinelLABS research identified LLM-enabled malware through pattern matching against embedded API keys and specific prompt structures.
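The hunting approach described above can be sketched as a simple scanner. The patterns below are illustrative guesses at what "embedded API keys and prompt structures" might look like (OpenAI-style `sk-` secret keys and system-prompt boilerplate); they are not SentinelLABS's actual detection rules.

```python
import re

# Hypothetical indicators of an LLM-enabled sample.
# Pattern choices are assumptions for illustration, not published signatures.
API_KEY_PATTERN = re.compile(rb"sk-[A-Za-z0-9]{20,}")                    # OpenAI-style key
PROMPT_PATTERN = re.compile(rb"You are a[^\n]{0,120}assistant", re.IGNORECASE)

def scan_sample(data: bytes) -> dict:
    """Scan raw sample bytes for embedded API keys and prompt-like strings."""
    return {
        "api_keys": [m.group().decode() for m in API_KEY_PATTERN.finditer(data)],
        "prompts": [m.group().decode() for m in PROMPT_PATTERN.finditer(data)],
    }

# Synthetic sample: strings an LLM-enabled binary might carry.
sample = b"\x00You are a helpful assistant\x00sk-AbCdEf0123456789GhIjKlMn\x00"
hits = scan_sample(sample)
```

Hunting on these artifacts works precisely because the malware must ship a usable key and prompt to call the model at runtime, so those strings sit in the binary even when the attack code itself does not.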

Traditional malware ships its attack logic in static binaries, but LLM-enabled malware retrieves and executes code on demand. Static signature-based defenses struggle against this approach because each invocation of an LLM may yield unique code patterns.

Dynamic analysis likewise faces challenges when malicious ...


Copyright of this story solely belongs to gbhackers. To see the full text, visit the original article.