Meet SymbolicAI: A Machine Learning Framework that Combines Generative Models and Solvers for Logic-Based Approaches
Generative AI has recently seen a boom, with large language models (LLMs) showing broad applicability across many fields. These models have improved the performance of numerous tools, including those that facilitate search-based interactions, program synthesis, chat, and many more. Language-based methods have also made it easier to bridge modalities, enabling transformations such as text-to-code, text-to-3D, text-to-audio, text-to-image, and text-to-video. These uses only begin to illustrate the far-reaching impact of language-based interactions on the future of human-computer interaction.
Instruction-based fine-tuning of LLMs through reinforcement learning from human feedback (RLHF) or direct preference optimization (DPO) has shown encouraging results in addressing value misalignment and has opened up new interaction patterns such as chains, trees, and graphs of thoughts. Yet despite their strength in formal linguistic competence, recent research shows that LLMs fall short in functional language competence.
Researchers from Johannes Kepler University and the Austrian Academy of Sciences introduce SymbolicAI, a compositional neuro-symbolic (NeSy) framework that can represent and manipulate compositional, multi-modal, and self-referential structures. Through in-context learning, SymbolicAI augments the generative process of LLMs with functional zero- and few-shot operations, paving the way for flexible applications. These operations direct the generation process and enable a modular architecture with many different types of solvers, including engines that evaluate mathematical expressions in formal language, theorem provers, knowledge databases, and search engines for information retrieval.
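This operation-centric design can be pictured with a small, self-contained sketch. The snippet below is illustrative only and does not use the SymbolicAI API; names such as `Engine`, `LLMEngine`, `MathEngine`, and `zero_shot` are hypothetical placeholders standing in for an LLM backend and a formal solver behind one common interface.

```python
# Minimal sketch (not the SymbolicAI API): wrap an instruction as a reusable
# "zero-shot operation" and route it to different solver backends.
from typing import Callable, Dict


class Engine:
    """Abstract solver backend: an LLM, a formal math evaluator, a search index, ..."""
    def run(self, prompt: str) -> str:
        raise NotImplementedError


class LLMEngine(Engine):
    """Stub standing in for a generative model; a real backend would call an LLM API."""
    def run(self, prompt: str) -> str:
        return f"<llm answer to: {prompt!r}>"


class MathEngine(Engine):
    """Formal backend that evaluates plain arithmetic instead of asking the LLM."""
    def run(self, prompt: str) -> str:
        return str(eval(prompt, {"__builtins__": {}}))  # illustrative only


ENGINES: Dict[str, Engine] = {"llm": LLMEngine(), "math": MathEngine()}


def zero_shot(instruction: str, engine: str = "llm") -> Callable[[str], str]:
    """Turn an instruction into a composable operation backed by a chosen solver."""
    def op(payload: str) -> str:
        prompt = f"{instruction}\n{payload}" if engine == "llm" else payload
        return ENGINES[engine].run(prompt)
    return op


summarize = zero_shot("Summarize the following text in one sentence:")
evaluate = zero_shot("", engine="math")

print(summarize("Neuro-symbolic systems combine learned and symbolic components."))
print(evaluate("2 * (3 + 4)"))  # routed to the formal solver, not the LLM
```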
The researchers aimed to design domain-invariant problem solvers and expose them as building blocks for creating compositional functions as computational graphs. The framework also supports an extendable toolset that combines classical and differentiable programming paradigms. They drew inspiration for SymbolicAI’s architecture from previous work on cognitive architectures, the impact of language on the formation of semantic maps in the brain, and the evidence that the human brain has a selective language processing module. They view language as a core processing module that defines a foundation for general AI systems, separate from other cognitive processes such as thinking or memory.
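To make the idea of compositional functions as computational graphs concrete, here is a minimal, hypothetical sketch (again, not the framework's own API) in which classical code and LLM-backed operations sit side by side as nodes of a small graph:

```python
# Minimal sketch: operations as nodes of a computational graph, where each node
# consumes the outputs of its parents and results are memoized per evaluation.
from typing import Callable, Dict, List


class Node:
    def __init__(self, name: str, fn: Callable[..., str], parents=()):
        self.name, self.fn, self.parents = name, fn, list(parents)

    def evaluate(self, cache: Dict[str, str]) -> str:
        if self.name not in cache:  # shared sub-graphs are computed only once
            inputs = [p.evaluate(cache) for p in self.parents]
            cache[self.name] = self.fn(*inputs)
        return cache[self.name]


# Leaves hold raw inputs; inner nodes could be classical code or LLM calls.
question = Node("question", lambda: "What is 17 * 3?")
extract = Node("extract", lambda q: "17 * 3", [question])  # e.g. an LLM-backed parser
compute = Node("compute", lambda e: str(eval(e, {"__builtins__": {}})), [extract])  # a formal solver
answer = Node("answer", lambda q, r: f"{q} -> {r}", [question, compute])

print(answer.evaluate({}))  # "What is 17 * 3? -> 51"
```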
Finally, they address the evaluation of multi-step NeSy generation processes by introducing a benchmark, deriving a quality measure, and computing its empirical score, all in tandem with the framework. Using state-of-the-art LLMs as NeSy engine backends, they empirically evaluate and discuss possible application areas. Their evaluation centers on the GPT family of models, specifically GPT-3.5 Turbo and GPT-4 Turbo, because they are the most effective models to date; Gemini-Pro, because it is the best-performing model available through the Google API; LLaMA 2 13B, because it provides a solid foundation among Meta’s open-source LLMs; and Mistral 7B and Zephyr 7B, as representative base and fine-tuned open-source contenders, respectively. To assess the models’ logic capabilities, they define mathematical and natural-language forms of logical expressions and analyze how well the models can translate and evaluate logical statements across domains. The team also tests how well models can design, build, maintain, and execute hierarchical computational graphs.
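As a rough illustration of the cross-domain logic check described above, the following sketch (not the paper's benchmark code) accepts a candidate translation of a natural-language claim only if it agrees with a reference formal expression on every truth assignment:

```python
# Minimal sketch: verify a proposed formalization by exhaustive truth-table comparison.
from itertools import product


def equivalent(expr_a: str, expr_b: str, variables=("p", "q")) -> bool:
    """Return True if two Boolean expressions agree on all truth assignments."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if eval(expr_a, {"__builtins__": {}}, env) != eval(expr_b, {"__builtins__": {}}, env):
            return False
    return True


reference = "(not p) or q"         # formal form of "if p then q"
candidate = "not (p and not q)"    # e.g. a model's proposed translation
print(equivalent(reference, candidate))  # True: the translation is logically faithful
```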
SymbolicAI lays the groundwork for future studies in areas such as self-referential systems, hierarchical computational graphs, sophisticated program synthesis, and the creation of autonomous agents by integrating probabilistic approaches with AI design. The team strives to foster a culture of collaborative growth and innovation through its commitment to open-source principles.
Check out the Paper and GitHub. All credit for this research goes to the researchers of this project.
Dhanshree Shenwai is a Computer Science Engineer with solid experience in FinTech companies covering the Financial, Cards & Payments, and Banking domains, and a keen interest in applications of AI. She is enthusiastic about exploring new technologies and advancements in today’s evolving world, making everyone’s life easier.