Generative AI tools have conquered the workplace, especially chatbots based on large language models (LLMs), such as OpenAI’s ChatGPT and Google’s Bard.
These powerful tools can perform a broad range of tasks, from helping draft polished emails to summarizing lengthy documents, freeing up time-strapped workers to focus on more strategic activities. Using LLMs in the workplace is not without risk, however. Samsung, for example, banned staff use of ChatGPT in May 2023 after employees accidentally leaked sensitive data through the chatbot.
On July 20, 2023, Plurilock, a Canadian cybersecurity provider, launched a product that aims to prevent sensitive data from inadvertently being sent to such AI platforms.
The new solution, called PromptGuard, is an AI-driven cloud access security broker (CASB) that lets employees use generative AI tools while ensuring sensitive data never reaches the underlying AI systems.
PromptGuard combines mature data loss prevention (DLP) technology with Plurilock’s new AI platform technology so that users can interact with generative AI tools without sensitive data reaching the AI platform through their prompts.
Plurilock’s technology anonymizes prompts in a way that does not disrupt the user’s experience with the LLM chatbot; a simplified sketch of this kind of prompt anonymization appears below.
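To illustrate the general idea, here is a minimal Python sketch of DLP-style prompt anonymization: sensitive values are detected, swapped for placeholders before the prompt leaves the organization, and restored in the model’s reply. This is not Plurilock’s actual implementation; the patterns, placeholder scheme, and function names are all hypothetical, and a real DLP engine would use far more sophisticated detection.

```python
import re

# Hypothetical patterns for a few common sensitive data types.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
}

def anonymize_prompt(prompt: str) -> tuple[str, dict[str, str]]:
    """Replace detected sensitive values with numbered placeholders.

    Returns the redacted prompt plus a mapping so the placeholders
    can be restored in the model's response.
    """
    mapping: dict[str, str] = {}
    counter = 0
    for label, pattern in PATTERNS.items():
        def _replace(match: re.Match) -> str:
            nonlocal counter
            counter += 1
            token = f"<{label}_{counter}>"
            mapping[token] = match.group(0)
            return token
        prompt = pattern.sub(_replace, prompt)
    return prompt, mapping

def restore_response(response: str, mapping: dict[str, str]) -> str:
    """Swap placeholders in the model's reply back to the originals."""
    for token, original in mapping.items():
        response = response.replace(token, original)
    return response

# Only the redacted prompt would ever be sent to the AI platform.
redacted, mapping = anonymize_prompt(
    "Reply to jane.doe@example.com about key sk-abcdefghijklmnopqrstuv."
)
print(redacted)
# Reply to <EMAIL_1> about key <API_KEY_2>.
```

Because the original values stay in a local mapping, the user still sees a seamless round trip, while the AI platform only ever receives the placeholders.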
PromptGuard was developed as part of the company’s focus on generative AI safety and is built on Plurilock’s new CASB technology for AI, the subject of a US provisional patent filing announced on July 18, 2023.
It is available through the Plurilock AI platform under the company’s early access program as a closed, invitation-only beta.