Large Language Models Surprise Meta AI Researchers at Compiler Optimization!

“We thought this would be a paper about the obvious failings of LLMs that would serve as motivation for future clever ideas to overcome those failings. We were entirely taken by surprise to find that in many cases a sufficiently trained LLM can not only predict the best optimizations to apply to an input code, but it can also directly perform the optimizations without resorting to the compiler at all!” - Researchers at Meta AI

Meta AI researchers set out to make Large Language Models (LLMs) perform the same kind of code optimizations that traditional compilers such as LLVM do. LLVM’s optimizer is incredibly complex, with thousands of rules and algorithms implemented in over one million lines of C++ code.

They did not expect LLMs to handle this complexity, since the models are typically used for tasks like translating languages and generating code. Compiler optimization demands many different kinds of reasoning, arithmetic, and the application of complex techniques, none of which LLMs were thought to be good at. Yet once the methodology was applied, the results were genuinely surprising.

The figure above gives an overview of the methodology, showing the model input (prompt) and output (answer) during training and inference. The prompt contains unoptimized code. The answer contains an optimization pass list, instruction counts, and the optimized code. During inference, only the optimization pass list is generated; it is then fed into the compiler, which guarantees that the optimized code is correct.
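To make the format concrete, here is a minimal Python sketch of the prompt/answer pairing and of the inference path, assuming LLVM's opt tool is available on the PATH; generate_pass_list is a hypothetical stand-in for the trained model, not Meta's actual implementation.

# Minimal sketch of the training-example format and the inference path
# described above. Assumes LLVM's `opt` is on PATH; `generate_pass_list`
# is a hypothetical stand-in for the trained model, not Meta's code.
import subprocess

def build_training_example(unoptimized_ir: str, pass_list: str,
                           count_before: int, count_after: int,
                           optimized_ir: str) -> dict:
    """Prompt = unoptimized code; answer = pass list, instruction counts,
    and the optimized code (auxiliary targets used only during training)."""
    return {
        "prompt": unoptimized_ir,
        "answer": (f"{pass_list}\n"
                   f"instructions before={count_before} after={count_after}\n"
                   f"{optimized_ir}"),
    }

def optimize_at_inference(unoptimized_ir: str, generate_pass_list) -> str:
    """At inference only the pass list is taken from the model; the compiler
    applies it, so the resulting code is correct by construction."""
    pass_list = generate_pass_list(unoptimized_ir)   # e.g. "mem2reg,instcombine"
    result = subprocess.run(
        ["opt", f"-passes={pass_list}", "-S", "-"],  # apply predicted passes
        input=unoptimized_ir.encode(), capture_output=True, check=True)
    return result.stdout.decode()                    # optimized LLVM IR

The key design choice here is that the optimized code emitted by the model is only an auxiliary training signal; at inference time the compiler performs the actual transformation, so correctness never depends on the model.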

Their approach is straightforward: they start with a 7-billion-parameter Large Language Model (LLM) architecture taken from LLaMa 2 [25] and initialize it from scratch. The model is then trained on a vast dataset of millions of LLVM assembly examples, each paired with the best compiler options found by a search process for that assembly, as well as the assembly code that results from applying those optimizations. From these examples alone, the model acquires the ability to optimize code with remarkable precision.
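The "best compiler options determined through a search process" can be pictured with a simple autotuning loop. The sketch below uses random sampling over opt pass lists and a crude line-based instruction count as the objective; the paper's actual autotuner and pass space are far more extensive, so treat this purely as an illustration.

# Illustrative autotuning loop: randomly sample opt pass lists and keep the
# one that yields the smallest IR. The candidate pass set and the line-based
# instruction count are rough stand-ins, not the paper's exact setup.
import random
import subprocess

CANDIDATE_PASSES = ["mem2reg", "sroa", "early-cse", "instcombine",
                    "simplifycfg", "gvn", "dce", "loop-unroll"]

def run_opt(ir: str, passes: str) -> str:
    out = subprocess.run(["opt", f"-passes={passes}", "-S", "-"],
                         input=ir.encode(), capture_output=True, check=True)
    return out.stdout.decode()

def instruction_count(ir: str) -> int:
    """Rough proxy: indented lines inside function bodies, excluding labels."""
    return sum(1 for line in ir.splitlines()
               if line.startswith("  ") and not line.rstrip().endswith(":"))

def search_best_pass_list(ir: str, budget: int = 100) -> tuple[str, str]:
    """Return (pass_list, optimized_ir) for the best candidate found."""
    best_passes = "default<Oz>"                      # start from the -Oz pipeline
    best_ir = run_opt(ir, best_passes)
    best_count = instruction_count(best_ir)
    for _ in range(budget):
        k = random.randint(1, len(CANDIDATE_PASSES))
        passes = ",".join(random.sample(CANDIDATE_PASSES, k))
        try:
            candidate = run_opt(ir, passes)
        except subprocess.CalledProcessError:
            continue                                 # skip invalid pipelines
        n = instruction_count(candidate)
        if n < best_count:
            best_passes, best_ir, best_count = passes, candidate, n
    return best_passes, best_ir

Each training example then pairs the unoptimized assembly with the pass list and optimized code returned by such a search.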

The notable contribution of their work is being the first to apply LLMs to the task of code optimization. They build LLMs tailored specifically for compiler optimization and show that, with a single compilation, these models achieve a 3.0% improvement in code size reduction, whereas a search-based approach needs 2.5 billion compilations to attain a 5.0% improvement. In contrast, state-of-the-art machine learning approaches lead to regressions and require thousands of compilations. The researchers also include supplementary experiments and code examples to give a more comprehensive picture of the potential and limitations of LLMs in code reasoning. Overall, they find the efficacy of LLMs in this context remarkable and believe their findings will be of interest to the broader community.
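For context on what the headline numbers measure, here is a hedged sketch of computing the percentage code size reduction relative to the -Oz pipeline for a single module; the line-based instruction count is a simplification of the paper's metric, and predicted_passes would come from the model.

# Hedged sketch of the evaluation metric: percent reduction in instruction
# count relative to -Oz for one module. Helpers repeat the minimal versions
# from the search sketch above so this snippet stands alone.
import subprocess

def run_opt(ir: str, passes: str) -> str:
    out = subprocess.run(["opt", f"-passes={passes}", "-S", "-"],
                         input=ir.encode(), capture_output=True, check=True)
    return out.stdout.decode()

def instruction_count(ir: str) -> int:
    return sum(1 for line in ir.splitlines()
               if line.startswith("  ") and not line.rstrip().endswith(":"))

def reduction_over_Oz(ir: str, predicted_passes: str) -> float:
    """Positive values mean the predicted pass list beat -Oz on code size."""
    baseline = instruction_count(run_opt(ir, "default<Oz>"))
    tuned = instruction_count(run_opt(ir, predicted_passes))
    return 100.0 * (baseline - tuned) / max(baseline, 1)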


Check out the Paper. All Credit For This Research Goes To the Researchers on This Project. Also, don’t forget to join our 30k+ ML SubReddit, 40k+ Facebook Community, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more.

If you like our work, you will love our newsletter.


Janhavi Lande is an Engineering Physics graduate from IIT Guwahati, class of 2023. She is an aspiring data scientist and has been working in ML/AI research for the past two years. She is most fascinated by this ever-changing world and its constant demand that humans keep up with it. In her spare time she enjoys traveling, reading, and writing poems.

