The security and privacy concerns around the use of generative AI today could be just the tip of a forming iceberg, security researcher Maria Markstedter (aka Azeria) argued during her opening speech at the Black Hat USA conference on August 9, 2023.
While machine learning models have been part of our lives for years – think Siri, autocorrect or recommendation algorithms – large language models (LLMs) are the first “to really blow our minds,” she said.
“After OpenAI went ahead and released their ChatGPT, these tools went from helping the 1% to the remaining 99%, and everyone jumped in,” she continued.
“People started experimenting with AI agents to take autonomous actions. A flood of business use cases is being developed around LLMs and other generative AI models while lacking safety and security guardrails.”
AI Chatbots Are Like “Troubled Teenagers”
However, Azeria compared today’s LLMs to “troubled teenagers”: they lie and invent facts without batting an eye.
As the security risks and limitations of LLMs and generative AI models became widely known, many organizations grew skeptical, not least because these models are black boxes about which their creators reveal very little, Azeria told the Black Hat audience. Some even banned the use of generative AI within their corporate environments.
Microsoft, she argued, was the main protagonist in this story, having invested its first billion dollars in OpenAI as early as 2019, then another billion in 2021. “Today, Microsoft has reportedly invested $13bn in generative AI. It has also integrated OpenAI’s GPT into its products and is now pushing new products based on GPT-4. Really, Microsoft is greatly responsible for the mess we’re in right now.”
Faced with current and future risks, the responses of generative AI developers have been strikingly mixed, Azeria continued. On one hand, OpenAI’s Sam Altman expressed his concerns about the dangers of his technology while opening it to everyone; on the other, Microsoft CEO Satya Nadella said AI was going to move fast, and the company’s chief economist, Michael Schwarz, suggested that regulators “should wait until we see [any] harm before we regulate [generative AI models].”
Meanwhile, governments are taking different stances, with the EU and Canada developing AI legislation, the UK adopting a “pro-innovation approach,” and the US working hand-in-hand with AI firms to push self-regulation.
From External Chatbots to Integrated ML-as-a-Service Platforms
However, a lot is happening while we search for acceptable ways to mitigate the risks of generative AI, Azeria insisted. “Every business wants to be an AI business in some form or shape right now.”
After ChatGPT and other GPT-based tools appeared, OpenAI introduced plugins with GPT-4, which opened a new door to enterprise-focused use cases of LLMs.
However, many organizations skeptical of the black-box nature of proprietary LLMs preferred to integrate GPT-like models within their own infrastructure – and sometimes to feed the models with their own data.
Read more: What the OWASP Top 10 for LLMs Means for the Future of AI Security
“That’s why there is now an entire market of machine learning-as-a-service (MLaaS) platforms making model training and deployment easier and more accessible and affordable for businesses,” Azeria said.
Once again, Microsoft is one of the most prominent actors with Azure OpenAI Service – but others are starting to fight back, like Amazon Web Services (AWS) with its newly launched Amazon Bedrock.
Multi-Modal and Autonomous Agents
However, while these new value-chain models will probably help boost the adoption of LLMs in business use cases, “what organizations really want is not a text-based chatbot. They want autonomous agents giving them access to a super-smart workforce that can work all hours of the day without needing a salary – and preferably capable of understanding not just text, but graphs, pictures and videos as well.”
That’s why experimental tools have started appearing, such as BabyAGI, AutoGPT, AgentGPT and AdeptAI’s ACT-1. Azeria even believes this market will take off over the coming months.
“To achieve this vision of using multi-modal, autonomous agents for business use cases, however, organizations will have to grant them access to a multitude of data and first-party applications, which means that the notion of identity access management has to be re-evaluated, as well as how we assess data security,” Azeria said.
She concluded that organizations’ threat model “will eventually be turned upside down.”