Project Ire promises to use LLMs to detect whether code is malicious or benign
Microsoft has rolled out an autonomous AI agent that it claims can detect malware without human assistance.
The prototype, called Project Ire, reverse engineers software "without any clues about its origin or purpose," and then determines whether the code is malicious or benign, using large language models (LLMs) and a set of callable reverse engineering and binary analysis tools.
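The article doesn't detail Project Ire's internals, but the described pattern — a model that calls analysis tools on a binary, accumulates evidence, and then issues a verdict — can be sketched in miniature. Everything below is hypothetical: the tool (`extract_strings`), the indicator list, and the rule-based verdict are stand-ins for the LLM-driven loop, not Microsoft's implementation.

```python
# Toy sketch of a tool-calling triage loop: run an analysis "tool" over a
# sample, collect suspicious indicators as evidence, and convict only when
# enough evidence accumulates. All names and thresholds are illustrative.

def extract_strings(sample: bytes, min_len: int = 4) -> list[str]:
    """Stand-in for a strings/IOC-extraction tool: pull printable ASCII runs."""
    out, cur = [], bytearray()
    for b in sample:
        if 32 <= b < 127:
            cur.append(b)
        else:
            if len(cur) >= min_len:
                out.append(cur.decode())
            cur.clear()
    if len(cur) >= min_len:
        out.append(cur.decode())
    return out

# Hypothetical indicators a real system would learn or look up, not hardcode.
SUSPICIOUS = {"CreateRemoteThread", "VirtualAllocEx", "cmd.exe /c"}

def triage(sample: bytes) -> str:
    """Classify a sample as 'malicious' or 'benign' from accumulated evidence."""
    findings = [s for s in extract_strings(sample) if s in SUSPICIOUS]
    # A real agent would keep requesting tools (disassembly, sandbox traces)
    # until it could justify a conviction; one pass suffices for this sketch.
    return "malicious" if len(findings) >= 2 else "benign"
```

The point of the pattern is that the verdict is backed by enumerable evidence (the `findings`), which is what the article means by a "conviction case" strong enough to justify automatic blocking.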
"It was the first reverse engineer at Microsoft, human or machine, to author a conviction case — a detection strong enough to justify automatic blocking — for a specific advanced persistent threat (APT) malware sample, which has since been identified and blocked by Microsoft Defender," Redmond claimed in a Tuesday blog post.
If it performs as promised, and at scale, Project Ire will help relieve security analysts of the tedious work of manually analyzing every sample and classifying it as either good or bad. This can take hours, leading to alert fatigue and ...
Copyright of this story solely belongs to theregister.co.uk.