LLMWare Launches SLIMs: Small Specialized Function-Calling Models for Multi-Step Automation

As enterprises look to deploy LLMs in more complex production use cases beyond simple knowledge assistants, there is a growing recognition of three interconnected needs:  

  • Agents – complex workflows involve multiple steps and require the orchestration of multiple LLM calls;
  • Function Calls – models need to generate structured output that can be handled programmatically, including key tasks such as classification and clustering, which are often the connective tissue in such workflows; and 
  • Private Cloud – models and data pipelines need to be finetuned and tightly integrated with sensitive existing enterprise processes and data stores.  

LLMWare sets out to address all three of these challenges with the launch of its 1B-parameter small language models, called SLIMs (Structured Language Instruction Models), and a new set of capabilities in the LLMWare library for executing multi-model, multi-step agent workflows in a private cloud.

SLIMs join LLMWare's existing small, specialized model families – DRAGON, BLING, and Industry-BERT – and the LLMWare development framework to create a comprehensive set of open-source models and data pipelines addressing a wide range of complex enterprise RAG use cases.

Classification SLMs with Programmatic Outputs   

SLIMs are small, specialized models designed for natural language classification functions, and have been trained to produce programmatic outputs such as Python dictionaries, JSON, and SQL rather than conventional text outputs. 
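To make the idea concrete, here is a minimal sketch of what consuming this kind of output looks like. The response strings and keys below are illustrative assumptions, not the actual schemas of any particular SLIM model; the point is that structured output parses directly into Python data rather than requiring free-text post-processing:

```python
import json

# Hypothetical SLIM-style responses: structured values instead of free text.
# The exact keys and schemas here are assumptions for illustration.
sentiment_response = '{"sentiment": ["positive"]}'
ner_response = '{"people": ["Darren Oberst"], "organization": ["LLMWare"]}'

# Because the output is valid JSON, it parses straight into Python structures...
sentiment = json.loads(sentiment_response)["sentiment"][0]
entities = json.loads(ner_response)

# ...and can be handled programmatically downstream.
print(sentiment)                    # positive
print(entities["organization"][0])  # LLMWare
```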

There are 10 SLIM models being released: Sentiment, NER (Named Entity Recognition), Topic, Ratings, Emotions, Entities, SQL, Category, NLI (Natural Language Inference), and Intent.   

SLIMs are designed to supplement general-purpose LLMs in complex enterprise workflows.  Because they are built on a decoder LLM architecture, SLIMs benefit from the innovation curve in foundation models; this first SLIM launch focuses specifically on a wide range of classification activities. The larger vision is for SLIM models to span even more specialized functions and parameter sizes in the future.  

SLIMs have several attractive features for enterprise deployment:

  • They reimagine traditional ‘hard-coded’ bespoke classifiers for the Gen AI era and integrate seamlessly into LLM-based processes;
  • They share a common training methodology for fine-tuning and adaptation, making it easy to combine, stack, and fine-tune these models for specific use cases; and
  • They run multi-step workflows without a GPU: quantized versions of each SLIM model can be loaded alongside quantized state-of-the-art question-answering DRAGON LLMs to create agents.
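The agent pattern described above can be sketched in plain Python. The stub functions below merely stand in for quantized SLIM models (in practice these would be loaded through the llmware library); their names, keyword matching, and output schemas are assumptions for illustration. What the sketch shows is the shape of the workflow: several small function-calling models run in sequence, each returning a structured value that feeds the next decision.

```python
# A minimal sketch of a multi-model agent workflow. The stub functions
# stand in for quantized SLIM models; their outputs are assumptions.

def topic_tool(text: str) -> dict:
    # Stub for a topic-classification SLIM.
    return {"topic": ["earnings"]} if "revenue" in text.lower() else {"topic": ["general"]}

def sentiment_tool(text: str) -> dict:
    # Stub for a sentiment-classification SLIM.
    return {"sentiment": ["negative"]} if "decline" in text.lower() else {"sentiment": ["positive"]}

def run_agent(text: str) -> dict:
    # Each step adds a structured value to the shared state.
    state = {}
    state.update(topic_tool(text))
    state.update(sentiment_tool(text))
    # Decision point: only negative earnings text is escalated.
    state["escalate"] = state["topic"] == ["earnings"] and state["sentiment"] == ["negative"]
    return state

print(run_agent("Quarterly revenue showed a sharp decline."))
# {'topic': ['earnings'], 'sentiment': ['negative'], 'escalate': True}
```

Because each step reduces to a dictionary rather than free text, the orchestration layer stays ordinary Python control flow, which is what makes CPU-only, multi-step agents practical.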

Extends LLMWare’s Leadership in Small, Specialized Models

According to CEO Darren Oberst, “One of the major inhibitors to unlocking many enterprise use cases with LLMs is the ability to transform LLM outputs into decision points that can be handled programmatically.  Chat models have been optimized for fluency and conversation – which tend to be lengthy and hard to handle in a programmatic ‘if…then’ step. What we hear consistently from our enterprise customers is the need for classification functions and programmatic evaluation of text to reduce to a singular set of values and multi-step processes. This allows for a series of LLM outputs that can be used to arrive at decision points in the process. We believe that SLIMs are the missing piece in this equation.”
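The contrast Oberst describes can be made concrete with a small hypothetical sketch (both response formats below are illustrative assumptions): a chat model's fluent answer resists a reliable programmatic test, while a classifier's singular value drops straight into an ‘if…then’ branch.

```python
# Illustrative assumption: two ways a model might report sentiment.
chat_style = ("Overall, I'd say the passage leans somewhat positive, although "
              "there are a few caveats worth noting in the second paragraph.")
slim_style = {"sentiment": ["positive"]}

# The fluent answer requires brittle string matching to branch on...
chat_is_positive = "positive" in chat_style.lower() and "negative" not in chat_style.lower()

# ...while the structured output reduces to a singular value that an
# 'if...then' step can consume directly.
if slim_style["sentiment"][0] == "positive":
    next_step = "route_to_approval"
else:
    next_step = "route_to_review"

print(next_step)  # route_to_approval
```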

With the launch of the SLIM models, the LLMWare ecosystem is one of the most comprehensive open-source development frameworks for enterprise-focused LLM workflows:

  • 40+ open-source, small, specialized models optimized for different tasks, including the DRAGON and BLING models, optimized for highly accurate fact-based question-answering, and Industry-BERT embedding models fine-tuned by industry; and
  • An end-to-end data pipeline that combines high-speed, high-quality parsing with integration into leading persistent data stores (MongoDB, Postgres, SQLite) and leading vector stores (Milvus, PG Vector, Redis, Qdrant, and FAISS).

This latest innovation from LLMWare is poised to propel LLM automation in the enterprise and marks a significant step forward at the intersection of small language models and enterprise systems.

For more information, please see the llmware GitHub repository at www.github.com/llmware-ai/llmware.git.
For direct access to the models, please see the llmware Huggingface organization page at www.huggingface.co/llmware.



Thanks to AI Bloks for the thought leadership and educational content. AI Bloks supported the creation of this article.


Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence media platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among audiences.

