Researchers Use Machine Learning To Create A Novel Discrete Variational Autoencoder For Automatically Improving Code Efficiency
Writing computationally efficient code is a skill most programmers have wanted to master since they first printed Hello World. Researchers at Google and Georgia Tech have now made that easier with a machine learning model that suggests multiple variants of a program with higher computational efficiency, so the rewritten program produces its output faster than the original code, a step toward automating the optimization process. The framework applies multiple categorical transformations to a single program using a novel discrete variational autoencoder. The researchers study Python code, since their focus is on the computational behavior of the code itself, and the dataset of coding problems is drawn from Google Code Jam, an international competitive programming competition. The learned transformations make the produced code more efficient than the original developer-written code.
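To make the setting concrete, here is a hypothetical Python example (not taken from the paper) of the kind of edit such a model is meant to suggest: the same task solved with a quadratic membership test versus a linear set-based lookup.

```python
# Hypothetical illustration of an efficiency-improving edit: the same
# Code Jam-style task written with an O(n^2) membership test versus an
# O(n) set-based lookup. Function names and data are made up.

def count_common_slow(a, b):
    # For each element of `a`, scan all of `b`: quadratic in input size.
    count = 0
    for x in a:
        if x in b:          # list membership test is O(len(b))
            count += 1
    return count

def count_common_fast(a, b):
    # One conceptual edit, building a set once, makes each lookup O(1).
    b_set = set(b)
    return sum(1 for x in a if x in b_set)

if __name__ == "__main__":
    a = list(range(10_000))
    b = list(range(5_000, 15_000))
    assert count_common_slow(a[:100], b[:100]) == count_common_fast(a[:100], b[:100])
    print(count_common_fast(a, b))  # 5000 overlapping elements
```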
The goal is an algorithm that helps developers see multiple variants of their code that are more computationally efficient. Its success rests on three conditions. First, the hints it provides, while not perfect, should stay syntactically close to the original and remain logically understandable. Second, the suggested code should actually be more computationally efficient; the researchers measure this through textual similarity between the model's output and known higher-efficiency solutions in the dataset. Third, an ideal model should generate as many distinct variations of the original code as possible. The process of creating efficient code happens in two steps. In the first step, a transformer is trained on pairs of baseline and optimized code so that it can produce hints of much higher quality than earlier versions of the model. The dataset also showed that code can be improved along multiple discrete categories, so the researchers propose a discrete variational autoencoder to learn these categories in an unsupervised way. The edits are structured so that each learned edit type represents a local or conceptual change to the code.
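The paper's architecture is not reproduced here, but the core discrete-latent idea can be sketched in a few lines of JAX: a categorical "edit category" is sampled with a Gumbel-softmax relaxation so the whole model stays differentiable. The layer sizes, the number of categories, and the stand-in dense encoder and decoder below are all assumptions for illustration, not the authors' transformer.

```python
# Minimal sketch of a discrete (categorical) latent learned with a
# Gumbel-softmax relaxation. All sizes and layer choices are assumptions.
import jax
import jax.numpy as jnp

NUM_CATEGORIES = 8   # assumed number of discrete edit categories
HIDDEN = 32          # assumed embedding size of the (slow, fast) code pair

def sample_gumbel(key, shape):
    # Standard Gumbel(0, 1) noise for the relaxation.
    u = jax.random.uniform(key, shape, minval=1e-6, maxval=1.0 - 1e-6)
    return -jnp.log(-jnp.log(u))

def gumbel_softmax(key, logits, temperature=0.5):
    # Differentiable sample that approaches a one-hot vector as temperature -> 0.
    g = sample_gumbel(key, logits.shape)
    return jax.nn.softmax((logits + g) / temperature)

def encode(params, slow_emb, fast_emb):
    # Encoder sees both versions of the program and predicts edit-category logits.
    h = jnp.tanh(jnp.concatenate([slow_emb, fast_emb]) @ params["enc_w"])
    return h @ params["cat_w"]

def decode(params, slow_emb, category):
    # Decoder reconstructs the fast program's embedding from the slow program
    # plus the chosen discrete edit category.
    h = jnp.concatenate([slow_emb, category])
    return jnp.tanh(h @ params["dec_w"])

def loss_fn(params, key, slow_emb, fast_emb):
    logits = encode(params, slow_emb, fast_emb)
    category = gumbel_softmax(key, logits)
    recon = decode(params, slow_emb, category)
    recon_loss = jnp.mean((recon - fast_emb) ** 2)
    # KL against a uniform prior keeps all edit categories in use.
    q = jax.nn.softmax(logits)
    kl = jnp.sum(q * (jnp.log(q + 1e-8) - jnp.log(1.0 / NUM_CATEGORIES)))
    return recon_loss + kl

key = jax.random.PRNGKey(0)
k1, k2, k3, k4, k5, k6 = jax.random.split(key, 6)
params = {
    "enc_w": jax.random.normal(k1, (2 * HIDDEN, HIDDEN)) * 0.1,
    "cat_w": jax.random.normal(k2, (HIDDEN, NUM_CATEGORIES)) * 0.1,
    "dec_w": jax.random.normal(k3, (HIDDEN + NUM_CATEGORIES, HIDDEN)) * 0.1,
}
slow_emb = jax.random.normal(k4, (HIDDEN,))
fast_emb = jax.random.normal(k5, (HIDDEN,))
print(jax.grad(loss_fn)(params, k6, slow_emb, fast_emb)["cat_w"].shape)
```

At inference time, varying which category is fed to the decoder is what lets the model propose several different transformations of the same slow program.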
All the models are implemented in JAX and trained with a learning rate of 0.01 and a batch size of 16 for 100 epochs, using a parallel training method across 64 Google TPU cores on 16 host machines. Analyzing the hints the trained model outputs, the researchers find that the edits described above typically correspond to a syntactic change. Different transformations of the same program can be obtained by varying the latent code, and once the model has learned to apply these changes, programs that were previously slow produce their results faster.
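The paper reports those training settings; the loop below is a generic JAX data-parallel sketch with the same hyperparameters, using a toy linear model rather than the authors' transformer, and it assumes the global batch divides evenly across the available devices.

```python
# Generic JAX data-parallel training sketch with the reported settings
# (learning rate 0.01, batch size 16, 100 epochs). The model is a toy
# linear layer; on a TPU pod the same pattern shards batches across cores.
import functools
import jax
import jax.numpy as jnp

LEARNING_RATE = 0.01
BATCH_SIZE = 16
EPOCHS = 100

def loss_fn(params, x, y):
    pred = x @ params["w"]
    return jnp.mean((pred - y) ** 2)

@functools.partial(jax.pmap, axis_name="devices")
def train_step(params, x, y):
    grads = jax.grad(loss_fn)(params, x, y)
    # Average gradients across every device before the SGD update.
    grads = jax.lax.pmean(grads, axis_name="devices")
    return jax.tree_util.tree_map(lambda p, g: p - LEARNING_RATE * g, params, grads)

n_devices = jax.local_device_count()
key = jax.random.PRNGKey(0)
# Replicate the parameters onto every device.
params = jax.device_put_replicated({"w": jnp.zeros((8, 1))}, jax.local_devices())

for epoch in range(EPOCHS):
    kx, ky, key = jax.random.split(key, 3)
    # Shard one global batch of 16 synthetic examples across the devices.
    x = jax.random.normal(kx, (n_devices, BATCH_SIZE // n_devices, 8))
    y = jax.random.normal(ky, (n_devices, BATCH_SIZE // n_devices, 1))
    params = train_step(params, x, y)

print(jax.tree_util.tree_map(lambda p: p.shape, params))
```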
In conclusion, this model is a concrete step toward automating code optimization, which could save a great deal of time and person-hours. As the field moves toward machine learning models that can write code, it is essential to also focus on code optimization, since slower code adds not only to a company's costs but also to its carbon footprint. Through this model, we are one step closer to reducing the carbon footprint of computing.
This article is a research summary written by Marktechpost Staff based on the preprint research paper 'Learning to Improve Code Efficiency'. All credit for this research goes to the researchers on this project. Check out the paper and reference article. Please don't forget to join our ML subreddit.
A machine learning enthusiast who loves to research and learn about new technologies such as AlphaFold and DeepMind's AlphaZero, which are among the best AI systems in their respective fields. I am very excited about the future of AI and how we will implement it in our daily lives.