New AI Research From DeepMind Proposes Two Direction- and Structure-Aware Positional Encodings for Directed Graphs
Transformer models have been gaining a lot of popularity recently. These neural network models learn context and meaning by following relationships in sequential input, such as the words in a sentence. With the introduction of models like GPT-3.5 and GPT-4 from OpenAI, the field of Artificial Intelligence, and Deep Learning in particular, has advanced rapidly and attracted enormous attention. Transformers are now key components in competitive programming, conversational question answering, combinatorial optimization, and graph learning tasks.
In competitive programming, transformer models are used to produce solutions from textual descriptions. The well-known chatbot ChatGPT, a GPT-based conversational question-answering model, is perhaps the best-known example of a transformer in practice. Transformers have also been used to tackle combinatorial optimization problems such as the Travelling Salesman Problem, and they have been successful in graph learning tasks, especially in predicting the properties of molecules.
Transformer models have shown great versatility across modalities such as images, audio, video, and undirected graphs, but transformers for directed graphs have received comparatively little attention. To address this gap, a team of researchers has proposed two direction- and structure-aware positional encodings designed specifically for directed graphs. The first is based on the Magnetic Laplacian, a direction-aware generalization of the combinatorial Laplacian. Its eigenvectors capture crucial structural information while taking the directionality of edges into account. By incorporating these eigenvectors into the positional encodings, the transformer becomes aware of edge direction and can represent the semantics and dependencies found in directed graphs.
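To make the idea concrete, below is a minimal NumPy sketch of how positional encodings could be derived from the Magnetic Laplacian. The function name, the default potential q, and the way the complex eigenvectors are flattened into real-valued node features are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np

def magnetic_laplacian_pe(A, q=0.25, k=2):
    """Positional encodings from the k lowest eigenvectors of the Magnetic Laplacian.

    A: (n, n) adjacency matrix of a directed graph, A[u, v] = 1 for an edge u -> v.
    q: potential controlling how strongly edge direction is encoded
       (q = 0 recovers the combinatorial Laplacian of the undirected graph).
    k: number of eigenvectors to keep as per-node positional features.
    """
    A = np.asarray(A, dtype=float)
    A_sym = np.clip(A + A.T, 0.0, 1.0)           # undirected support of the graph
    deg = A_sym.sum(axis=1)
    theta = 2.0 * np.pi * q * (A - A.T)          # phase: +2*pi*q forward, -2*pi*q backward
    H = A_sym * np.exp(1j * theta)               # Hermitian "magnetic" adjacency
    L = np.diag(deg) - H                         # Magnetic Laplacian (Hermitian, PSD)
    eigvals, eigvecs = np.linalg.eigh(L)         # real eigenvalues ascending, complex eigenvectors
    vecs = eigvecs[:, :k]                        # k lowest-frequency eigenvectors
    # Flatten complex eigenvectors into real-valued node features.
    return np.concatenate([vecs.real, vecs.imag], axis=1)

# Toy example: a directed 4-cycle 0 -> 1 -> 2 -> 3 -> 0.
A = np.zeros((4, 4))
for u, v in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    A[u, v] = 1.0
print(magnetic_laplacian_pe(A, q=0.25, k=2).shape)   # (4, 4): two eigenvectors, real and imaginary parts
```

Setting q = 0 makes the phase vanish, so the encoding falls back to ordinary Laplacian eigenvectors of the undirected graph, which is why this construction is described as a direction-aware extension.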
The second proposed technique is directional random-walk encodings. Random walks are a popular way of exploring and analyzing graphs; here, the model learns about the directional structure of a directed graph by simulating random walks that follow edge directions and incorporating the resulting walk statistics into the positional encodings. This knowledge helps the model understand the links and the flow of information within the graph, which benefits a variety of downstream tasks.
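The sketch below shows one plausible variant of such encodings: for every node it collects the k-step return probabilities of a random walk that respects edge directions, on both the original and the reversed graph. The exact feature set and normalization are assumptions for illustration and may differ from the paper's.

```python
import numpy as np

def directed_rw_pe(A, K=4):
    """Random-walk positional encodings for a directed graph.

    For each node, collect the probability of returning to it after k = 1..K
    steps of a random walk that follows edge directions, and the same quantity
    on the reversed graph, so forward and backward structure are both captured.
    """
    A = np.asarray(A, dtype=float)
    feats = []
    for M in (A, A.T):                            # forward walks, then reverse walks
        out_deg = M.sum(axis=1, keepdims=True)
        # Row-stochastic transition matrix; nodes with no outgoing edges get zero rows.
        P = np.divide(M, out_deg, out=np.zeros_like(M), where=out_deg > 0)
        Pk = np.eye(len(M))
        diag_feats = []
        for _ in range(K):
            Pk = Pk @ P                           # k-step transition probabilities
            diag_feats.append(np.diag(Pk))        # return probability of each node
        feats.append(np.stack(diag_feats, axis=1))
    return np.concatenate(feats, axis=1)          # shape (n, 2 * K)

# Toy example: the directed 3-cycle 0 -> 1 -> 2 -> 0.
A = np.zeros((3, 3))
for u, v in [(0, 1), (1, 2), (2, 0)]:
    A[u, v] = 1.0
print(directed_rw_pe(A, K=3))   # return probability is 1 only after 3 steps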
The team has shared that empirical analysis shows the direction- and structure-aware positional encodings perform well on a number of downstream tasks. One of these tasks is correctness testing of sorting networks, which entails determining whether a particular sequence of compare-and-swap operations truly constitutes a sorting network. By utilizing the directionality information in the directed-graph representation, the proposed model also outperforms the previous state-of-the-art method on the Open Graph Benchmark Code2 dataset by a relative 14.7%.
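For context, a sorting network is a fixed sequence of compare-and-swap operations on pairs of wires, and by the zero-one principle it sorts every input if and only if it sorts every binary input. The brute-force checker below merely defines the ground-truth label for this task; the paper's model predicts this label from a directed-graph representation of the network rather than by enumeration.

```python
from itertools import product

def is_sorting_network(n_wires, comparators):
    """Brute-force correctness check using the zero-one principle:
    a comparator network sorts all inputs iff it sorts all 0/1 inputs."""
    for bits in product((0, 1), repeat=n_wires):
        values = list(bits)
        for i, j in comparators:                 # compare-and-swap wires i and j (i < j)
            if values[i] > values[j]:
                values[i], values[j] = values[j], values[i]
        if values != sorted(values):
            return False
    return True

# A valid 3-wire sorting network ...
print(is_sorting_network(3, [(0, 1), (1, 2), (0, 1)]))   # True
# ... and the same network with one comparator removed, which no longer sorts.
print(is_sorting_network(3, [(0, 1), (1, 2)]))           # False
```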
The team has summarized the contributions as follows:
- A clear connection between sinusoidal positional encodings, commonly used in transformers, and the eigenvectors of the Laplacian has been established (a small numerical sketch follows this list).
- The team has proposed spectral positional encodings that extend to directed graphs, providing a way to incorporate directionality information into the positional encodings.
- Random walk positional encodings have been extended to directed graphs, enabling the model to capture the directional structure of the graph.
- The team has evaluated the predictiveness of structure-aware positional encodings for various graph distances, demonstrating their effectiveness. They have introduced the task of predicting the correctness of sorting networks, showcasing the importance of directionality in this application.
- The team has quantified the benefits of representing a sequence of program statements as a directed graph and has proposed a new graph construction method for source code, improving predictive performance and robustness.
- A new state-of-the-art performance on the OGB Code2 dataset has been achieved, specifically for function name prediction, with a 2.85% higher F1 score and a relative improvement of 14.7%.
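As a small numerical illustration of the first contribution above (a sketch, not the paper's derivation): the eigenvectors of the combinatorial Laplacian of a path graph, i.e. a sequence viewed as a graph, are cosines of the node index, the same sinusoidal family of features that sinusoidal positional encodings assign to sequence positions.

```python
import numpy as np

n = 8
# Combinatorial Laplacian of an undirected path graph with n nodes
# (a sequence viewed as a graph: position i connected to position i + 1).
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A
_, eigvecs = np.linalg.eigh(L)

# The k-th eigenvector of the path Laplacian equals, up to sign, the
# normalized cosine cos(pi * k * (i + 0.5) / n) over node indices i --
# a sinusoid over positions, as used by sinusoidal positional encodings.
k = 2
i = np.arange(n)
analytic = np.cos(np.pi * k * (i + 0.5) / n)
analytic /= np.linalg.norm(analytic)
numeric = eigvecs[:, k]
print(np.isclose(abs(numeric @ analytic), 1.0))   # True: identical up to sign
```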
Check out the Paper. All credit for this research goes to the researchers on this project.
Tanya Malhotra is a final year undergrad from the University of Petroleum & Energy Studies, Dehradun, pursuing BTech in Computer Science Engineering with a specialization in Artificial Intelligence and Machine Learning.
She is a Data Science enthusiast with good analytical and critical thinking, along with an ardent interest in acquiring new skills, leading groups, and managing work in an organized manner.