A growing number of cybersecurity vendors are integrating tools based on large language models (LLMs) into their offerings, and many are opting to build on OpenAI’s GPT models.
Microsoft launched its GPT-4-powered Security Copilot in March, and in April Recorded Future added a new research feature using OpenAI’s model trained on 40,000 threat intelligence data points.
Software supply chain security provider OX Security followed in May, while Security Service Edge (SSE) platform provider Netskope and email security developer Ironscales announced GPT-powered functionality during Infosecurity Europe in June.
Many other vendors are looking to leverage LLMs as well. During Infosecurity Europe, Mayur Upadhyaya, CEO of API security provider Contxt, told Infosecurity that his company had “secured an innovation grant in 2021, before the emergence of foundational models, to build a machine learning model for personal data detection, with a proprietary dataset. We are now trying to see how we can leverage foundational models with this dataset.”
Non-Deterministic AI Algorithms
LLMs are not the first type of AI to be integrated into cybersecurity products: many Infosecurity Europe exhibitors – the likes of BlackBerry Cyber Security’s Cylance AI, Darktrace, Ironscales and Egress – already leverage AI in their offerings.
However, while vendors rarely disclose exactly which AI algorithms they use, those algorithms are very likely deterministic.
Jack Chapman, VP of threat intelligence at Egress, told Infosecurity that his company was using “genetic programming, behavioral analytics-based algorithms, as well as social graphs.”
Ronnen Brunner, SVP of International Sales at Ironscales, said during his presentation at Infosecurity Europe that his firm was using “a broad range of algorithms, including some leveraging natural language processing (NLP), but not LLMs yet.”
According to Nicolas Ruff, a senior software engineer at Google, most AI algorithms used in cybersecurity are classifiers, a type of machine learning algorithm used to assign a class label to a data input.
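For illustration only, the sketch below shows what such a classifier looks like in practice, using scikit-learn and a made-up phishing-detection example; the features, labels and thresholds are invented and are not drawn from any vendor’s product.

```python
# A minimal sketch of a security classifier, assuming scikit-learn and a
# hypothetical labelled dataset of URL feature vectors (purely illustrative).
from sklearn.ensemble import RandomForestClassifier

# Toy feature vectors: [url_length, num_subdomains, uses_raw_ip]
X_train = [[72, 4, 1], [25, 1, 0], [95, 6, 1], [30, 2, 0]]
y_train = ["phishing", "benign", "phishing", "benign"]  # class labels

clf = RandomForestClassifier(random_state=0)
clf.fit(X_train, y_train)

# Given the same input, the trained classifier assigns the same class label.
print(clf.predict([[88, 5, 1]]))  # e.g. ['phishing']
```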
These classifiers, and the other machine learning models mentioned above, differ from LLMs and other generative AI models because they work in a closed loop and have built-in restrictions.
LLMs are built on massive training sets and are designed to predict the most probable words following a given prompt. These two features make them probabilistic rather than deterministic – which means they provide the most probable answer, not necessarily the right one.
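That distinction can be sketched in a few lines of Python; the next-word probabilities below are invented purely for illustration and do not come from any real model.

```python
import random

# Toy next-word distribution for a prompt such as "The attacker exfiltrated the ..."
# (probabilities are made up for illustration, not taken from an actual LLM).
next_word_probs = {"data": 0.55, "credentials": 0.25, "files": 0.15, "backups": 0.05}

# A deterministic system always returns the same output for the same input:
print(max(next_word_probs, key=next_word_probs.get))  # 'data'

# An LLM-style sampler draws from the distribution, so repeated runs can differ,
# and the most probable word is not guaranteed to be the correct one.
words, probs = zip(*next_word_probs.items())
print(random.choices(words, weights=probs, k=1)[0])
```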
Just Another Tool in the Toolbox
Current general-purpose LLMs tend to hallucinate, meaning they can give a convincing response that is entirely wrong.
Speaking to Infosecurity during Infosecurity Europe, Jon France, CISO of the non-profit (ISC)2, acknowledged that this makes current LLMs a risky tool for cybersecurity practices, where accuracy and precision are critical.
“LLMs can still be useful for various security purposes, like crafting security policies for everyone to understand,” he added.
Ganesh Chellappa, the head of support services at ManageEngine, agreed: “Anyone who has been using any user and entity behavior analytics (UEBA) solutions for many years has a huge amount of data that is just sitting there that they were never able to use. Now that LLMs are here, it’s not even a question; we must try and leverage them to make use of this data.”
Meanwhile, Chapman argued: “They can also be helpful for cybersecurity practitioners as a data pre-processing tool in areas such as anomaly detection (email security, endpoint protection…) or threat intelligence.”
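To illustrate the pre-processing role Chapman describes, the sketch below uses OpenAI’s Python SDK to turn a raw log line into structured fields before it reaches a separate detection step; the model choice, prompt and log line are assumptions made for the example, not anything Egress has disclosed.

```python
# Sketch: using an LLM to pre-process raw log text into structured fields
# ahead of anomaly detection. Assumes the openai Python SDK (v1.x) and an
# OPENAI_API_KEY in the environment; the model name and prompt are illustrative.
import json
from openai import OpenAI

client = OpenAI()

raw_log = "2023-06-21 14:02:11 login failed for admin from 203.0.113.7 (3rd attempt)"

response = client.chat.completions.create(
    model="gpt-4",  # assumed model choice
    messages=[
        {"role": "system",
         "content": "Extract timestamp, event, user and source_ip from the log "
                    "line and reply with JSON only."},
        {"role": "user", "content": raw_log},
    ],
)

structured = json.loads(response.choices[0].message.content)
print(structured)  # handed to a separate, deterministic detection step
```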
At this stage of development, France and Chapman insisted that the key thing to remember when using LLMs in cybersecurity is “to consider them as another tool in the toolbox – and one that should never be responsible for executive tasks.”
Open Source LLMs
According to Chellappa, the hallucination concerns will largely be solved when cybersecurity firms develop their own models from open source models like Meta’s LLaMA or Stanford University’s Alpaca and train them on their own datasets.
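A rough sketch of that approach, assuming Hugging Face’s transformers and datasets libraries, an openly available LLaMA-style checkpoint and a placeholder file of in-house incident text, might look like this; the model name, paths and hyperparameters are placeholders, not a recipe any vendor has described.

```python
# Sketch: fine-tuning an open source causal LM on in-house security data.
# Assumes the transformers and datasets libraries; model name, file paths and
# hyperparameters are placeholders for illustration.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "openlm-research/open_llama_3b"  # assumed open checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA-style tokenizers often lack one
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical in-house UEBA/incident text, one record per line.
dataset = load_dataset("text", data_files={"train": "ueba_incidents.txt"})
tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="llm-ueba", num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```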
However, SoSafe’s CEO, Dr. Niklas Hellemann, warned that the open source models won’t solve another growing issue LLM-based tools face: model poisoning.
Model poisoning refers to attack techniques in which an adversary injects bad data into a model’s training pool to make it learn something it shouldn’t.
“Open source models like LLaMA are already targeted with these attacks,” Hellemann told Infosecurity.
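The mechanics are easiest to see with a deliberately simple classifier rather than an LLM: flipping the labels on a handful of training samples is enough to change what the model learns. The sketch below is entirely synthetic and only illustrates the principle; real attacks on LLM training pipelines are far more subtle.

```python
# Toy illustration of data poisoning: relabelling part of the training set
# changes the trained model's behaviour. Synthetic data, for intuition only.
from sklearn.linear_model import LogisticRegression

# Feature: [number of failed logins]; label: 1 = suspicious, 0 = benign
X = [[0], [1], [2], [8], [9], [10]]
y_clean = [0, 0, 0, 1, 1, 1]
y_poisoned = [0, 0, 0, 0, 0, 1]  # attacker flips two 'suspicious' labels to benign

clean_model = LogisticRegression().fit(X, y_clean)
poisoned_model = LogisticRegression().fit(X, y_poisoned)

print(clean_model.predict([[9]]))     # [1] - flagged as suspicious
print(poisoned_model.predict([[9]]))  # [0] - the poisoned model waves it through
```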