Stanford AI Researchers Propose ‘FOCUS’: A Foundation-Model Framework That Aims to Achieve Perfect Secrecy for Personal Tasks
Machine learning holds the promise of assisting people with personal tasks. These tasks range from well-studied activities, such as topic classification over personal correspondence and open-ended question answering about personal relationships, to tasks specialized to individual users. Given the sensitive nature of the personal data such tasks require, these systems must guarantee that no private information leaks, deliver high-quality results, and remain feasible to deploy.
The ideal private system provides perfect secrecy: the probability that an adversary learns private information does not increase as the user interacts with the system. One straightforward way to satisfy this classical privacy guarantee is to train or fine-tune a model solely on the user's private dataset. Recent neural models, however, require large amounts of training data, while individual users typically have only a handful of labeled examples.
Federated learning (FL) across data spanning many users has emerged as a prominent way to overcome the problem of individual users lacking sufficient data. Rather than requiring every user to send raw data to a central location, FL trains a task model by shipping the model itself back and forth between users and a central server.
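To make FL's communication pattern concrete, here is a minimal FedAvg-style sketch in Python. The linear model, synthetic per-user data, and helper names such as `local_update` are illustrative assumptions for this post, not the paper's code:

```python
# Minimal sketch of federated averaging (FedAvg) rounds, assuming a
# simple least-squares linear model and synthetic "private" data.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Run a few gradient steps on one user's private data.
    Only the updated weights -- never the raw data -- leave the device."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w

# Synthetic private datasets for three users.
users = [(rng.normal(size=(20, 4)), rng.normal(size=20)) for _ in range(3)]

global_w = np.zeros(4)
for round_ in range(10):
    # Server ships the current model to each user; users train locally.
    local_ws = [local_update(global_w, X, y) for X, y in users]
    # Server averages the returned weights -- the exposed quantity that
    # an adversary could mine for private information.
    global_w = np.mean(local_ws, axis=0)

print("global weights after 10 rounds:", global_w)
```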
Although FL never transmits raw data between devices, it gives up perfect secrecy: the exposed model itself can be used to recover confidential information. FL improves model performance for the average user, but private data varies widely across individuals, so per-participant performance is often unequal. The training process can also be corrupted by adversarial participants or a malicious central server. Moreover, FL requires many rounds of communication among many users to perform well, introducing classic distributed-systems issues such as device heterogeneity and synchronization, and this cost is paid anew for every personal task a user wants to complete.
In response to these problems, researchers at Stanford University recently proposed Foundation model Controls for User Secrecy (FOCUS), a framework for serving personal tasks securely through a unidirectional data-flow architecture. FOCUS ships off-the-shelf public foundation models (FMs) into private user silos and applies zero-to-few-shot FM adaptation methods to complete personal tasks with the handful of training examples a user actually has.
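As a rough illustration of that unidirectional flow, the sketch below pulls a public FM into a local process and adapts it with in-context examples. The gpt2 model choice, the Hugging Face `transformers` usage, and the prompt format are assumptions for illustration, not details from the paper:

```python
# Sketch of a FOCUS-style unidirectional data flow: the public FM is
# downloaded into the private silo, and all adaptation happens locally.
from transformers import pipeline

# Step 1 (public -> private): pull an off-the-shelf public FM into the
# user's private silo. After this point, nothing is sent back out.
generator = pipeline("text-generation", model="gpt2")

# Step 2 (private only): adapt via in-context examples drawn from the
# user's own labeled data; the prompt never leaves the device.
prompt = (
    "Classify the topic of each email.\n"
    "Email: 'Dinner at 7 on Friday?' Topic: social\n"
    "Email: 'Your invoice #123 is attached.' Topic: billing\n"
    "Email: 'Can you review my draft?' Topic:"
)
out = generator(prompt, max_new_tokens=3, do_sample=False)
print(out[0]["generated_text"])
```

Because the only cross-boundary transfer is the public-to-private model download, interacting with the system reveals nothing new to an adversary, which is what the perfect-secrecy argument rests on.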
To formalize the privacy guarantee, the researchers used the Bell-LaPadula (BLP) model, under which this one-way flow preserves perfect secrecy. BLP was originally designed for government organizations to manage access control across multiple security levels, which maps naturally onto the setup of publicly accessible FMs and privately held personal data. Using few-shot FM adaptation strategies, the team found the FM baselines competitive with strong FL baselines on 6 of 7 relevant benchmarks from the privacy literature, spanning vision and natural language.
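To give a feel for the two BLP rules ("no read up", "no write down") as they map onto public FMs and private user data, here is a toy Python sketch; the level names and helper functions are hypothetical, not from the paper:

```python
# Toy encoding of the Bell-LaPadula properties with two security levels:
# the public FM sits at the low level, the user's silo at the high level.
LEVELS = {"public": 0, "private": 1}

def can_read(subject_level, object_level):
    # Simple-security property ("no read up"): a subject may only read
    # objects at or below its own level.
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level, object_level):
    # Star property ("no write down"): a subject may only write to
    # objects at or above its own level, so secrets cannot flow downward.
    return LEVELS[subject_level] <= LEVELS[object_level]

# The private user silo may read (download) the public FM...
assert can_read("private", "public")
# ...but may not write anything back down to the public level.
assert not can_write("private", "public")
```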
Conclusion
FOCUS suggests that perfect secrecy may be attainable for a variety of personal tasks, despite the field's current focus on statistical notions of privacy. The work is a proof of concept, and several issues remain for future work, including prompt fragility, out-of-domain performance degradation, and the slow runtime of running inference with large models. Relying on FMs also has drawbacks: FMs tend to hallucinate facts when they are uncertain, are available mainly in resource-rich languages, and are expensive to pretrain.
This article is a summary written by Marktechpost staff based on the paper 'Can Foundation Models Help Us Achieve Perfect Secrecy?'. All credit for this research goes to the researchers on this project. Check out the paper and GitHub repository.