The release of Meta’s new open-source Llama 2 has sparked discussion about use cases for large language models (LLMs). However, accessing and running Llama 2 on local hardware remains a significant barrier for many. To lower that barrier and democratize access to the model, Meta has partnered with Qualcomm to optimize Llama 2 for on-device use on Qualcomm’s AI-enabled Snapdragon chips.
The collaboration aims to enable Llama 2 implementations directly on-device. By running the model locally, developers can reduce cloud computing costs and offer users stronger privacy, since no data is transmitted to external servers. On-device processing also makes generative AI available without an internet connection and allows models to be personalized to users’ preferences.
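To make the on-device benefit concrete, here is a minimal sketch of running a quantized Llama 2 checkpoint locally with the community llama-cpp-python bindings. This is one community route to local inference, not the Meta–Qualcomm stack itself, and the model file name below is illustrative.

```python
# A minimal sketch of on-device Llama 2 inference, assuming the
# llama-cpp-python bindings and a locally downloaded quantized GGUF
# checkpoint (the file name is an illustrative assumption).
from llama_cpp import Llama

# Load the quantized model entirely from local storage;
# nothing is sent to an external server.
llm = Llama(model_path="llama-2-7b-chat.Q4_K_M.gguf", n_ctx=2048)

# Generate text offline; prompt and completion never leave the device.
result = llm("Q: Why run an LLM on-device? A:", max_tokens=64, stop=["Q:"])
print(result["choices"][0]["text"])
```

Because everything happens locally, the same script works with no network connection, which is exactly the offline and privacy story the partnership is aiming at.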
Qualcomm’s Hexagon processor equips its Snapdragon chips with various AI capabilities, including micro tile inferencing, tensor cores, and dedicated processing for SegNet, scalar, and vector workloads. Integrating Llama 2 into the Qualcomm AI Stack, the company’s toolchain for deploying AI models on-device, further optimizes on-device execution.
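As a rough illustration of what targeting such an accelerator can look like in practice, the sketch below creates an ONNX Runtime session with the Qualcomm QNN execution provider, which dispatches supported operators to the Hexagon NPU. The model path, input handling, and backend library name are assumptions for illustration; this is not the specific Llama 2 integration Qualcomm announced.

```python
# A rough sketch of dispatching an exported model to Qualcomm's Hexagon
# accelerator via ONNX Runtime's QNN execution provider. The model path
# and backend library name are illustrative assumptions.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",  # hypothetical exported model
    providers=[
        # "libQnnHtp.so" selects the Hexagon Tensor Processor backend on
        # Android; on Windows-on-Snapdragon it would be "QnnHtp.dll".
        ("QNNExecutionProvider", {"backend_path": "libQnnHtp.so"}),
        # Operators the QNN backend cannot handle fall back to the CPU.
        "CPUExecutionProvider",
    ],
)

# Build a dummy input matching the model's declared shape (dynamic
# dimensions are set to 1; float32 is assumed for this sketch).
inp = session.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
dummy = np.zeros(shape, dtype=np.float32)

outputs = session.run(None, {inp.name: dummy})
```

The design point to note is the fallback chain: the accelerator handles what it can, and the CPU provider quietly covers the rest, which is how heterogeneous mobile inference stacks typically stay robust.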
Meta has learned from the leak of the first LLaMA model, which was initially available only to researchers and academic institutions. Once the weights spread across the Internet, however, an explosion of open-source LLM innovation followed, producing various improved variants of LLaMA. The open-source community contributed significantly, creating versions compact enough to run on a single device and thereby making LLMs accessible to a broader audience.
In response, Meta has taken a different approach with Llama 2’s release, embracing openness and collaboration. The partnership gives Qualcomm insight into the model’s inner workings, enabling the chipmaker to optimize it for better performance on Snapdragon chips. The collaboration is expected to coincide with the launch of Qualcomm’s Snapdragon 8 Gen 3 chip in 2024.
The open-source community is also expected to play a crucial role in Llama 2’s development. By combining the industry’s momentum toward on-device AI with an open LLM ecosystem, the move is seen as the first of many steps toward fostering a vibrant on-device AI ecosystem.
Experts predict that open LLMs could power a new generation of AI-driven content generation, intelligent assistants, productivity applications, and more. The ability to run LLMs natively on-device unlocks numerous possibilities and supports the growing trend of AI capabilities at the edge, as seen in Apple’s Neural Engine in the M1 chip and Microsoft’s Hybrid AI Loop toolkit.
Overall, the partnership between Meta and Qualcomm marks a significant step toward democratizing access to AI models, opening up opportunities for developers to build AI-powered applications and potentially ushering in an on-device AI ecosystem reminiscent of the App Store boom on the iPhone.
Niharika is a technical consulting intern at Marktechpost. She is a third-year undergraduate pursuing her B.Tech at the Indian Institute of Technology (IIT) Kharagpur. She is a highly enthusiastic individual with a keen interest in machine learning, data science, and AI, and an avid reader of the latest developments in these fields.