Stanford Researchers Harness Deep Learning with GLOW and IVES to Transform Molecular Docking and Ligand Binding Pose Prediction

Deep learning has the potential to enhance molecular docking by improving scoring functions. Current sampling protocols often need prior information to generate accurate ligand binding poses, limiting scoring function accuracy. Two new protocols, GLOW and IVES, developed by researchers from Stanford University, address this challenge, demonstrating enhanced pose sampling efficacy. Benchmarking on diverse protein structures, including AlphaFold-generated ones, validates the methods. 

Deep learning in molecular docking often relies on rigid-protein docking datasets, neglecting protein flexibility, while flexible docking accounts for flexibility but tends to be less accurate. GLOW and IVES are advanced sampling protocols that address these limitations, consistently outperforming baseline methods, particularly in dynamic binding pockets. They hold promise for improving ligand pose sampling in protein-ligand docking, which is crucial for enhancing deep-learning-based scoring functions.

Molecular docking predicts ligand placement in protein binding sites, which is crucial for drug discovery. Conventional methods face challenges in generating accurate ligand poses. Deep learning can enhance accuracy but relies on effective pose sampling. GLOW and IVES improve samples for challenging scenarios, boosting accuracy. Applicable to unliganded or predicted protein structures, including AlphaFold-generated ones, they offer curated datasets and open-source Python code.

GLOW and IVES are two pose-sampling protocols for molecular docking. GLOW employs a softened van der Waals potential to generate ligand poses, while IVES improves accuracy by incorporating multiple protein conformations. Performance comparisons with baseline methods show the superiority of GLOW and IVES, and evaluation on test sets measures the percentage of correct poses sampled in cross-docking cases. Seed-pose quality is vital for IVES to run efficiently, with the Smina docking score used for seed selection.
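To make the idea of a "softened" van der Waals potential concrete, here is a minimal sketch comparing a standard 12-6 Lennard-Jones term with a soft-core variant. This is an illustrative assumption, not the paper's actual functional form or parameters: the function names, the `alpha` softening constant, and the epsilon/sigma values are hypothetical placeholders chosen for demonstration.

```python
import numpy as np

def lennard_jones(r, epsilon=0.238, sigma=3.4):
    """Standard 12-6 Lennard-Jones potential (illustrative units:
    kcal/mol for energy, Angstroms for distance)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

def softened_lennard_jones(r, epsilon=0.238, sigma=3.4, alpha=0.8):
    """Soft-core variant: shifting r^6 by alpha * sigma^6 caps the
    repulsive wall at short range, so near-clashing ligand poses get a
    finite (small) penalty instead of being rejected outright."""
    sr6 = sigma ** 6 / (r ** 6 + alpha * sigma ** 6)
    return 4.0 * epsilon * (sr6 ** 2 - sr6)
```

At clash distances the softened term stays finite while the standard term diverges; at longer range the two potentials coincide, so favorable contacts are scored essentially the same way. This is why softening broadens the pool of candidate poses without distorting the attractive part of the interaction.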

GLOW and IVES outperformed baseline methods in accurately sampling ligand poses, excelling in challenging scenarios and AlphaFold benchmarks with significant protein conformational changes. Evaluation on test sets confirmed their superior likelihood of sampling correct poses. IVES, which generates multiple protein conformations, offers benefits for geometric deep learning on protein structures, achieving performance comparable to Schrodinger IFD-MD with fewer conformations. Datasets of ligand poses for 5,000 protein-ligand pairs generated by GLOW and IVES are provided, aiding the development and evaluation of deep-learning-based scoring functions in molecular docking.
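The "likelihood of sampling correct poses" metric above can be sketched as follows. This assumes the common convention that a pose counts as correct when its heavy-atom RMSD to the crystal pose is within 2 Å; the function names and the fixed atom ordering (no symmetry correction) are simplifying assumptions for illustration, not the paper's exact evaluation code.

```python
import numpy as np

def rmsd(pose_a, pose_b):
    """Heavy-atom RMSD between two ligand poses given as (N, 3) coordinate
    arrays, assuming matching atom order and no symmetry correction."""
    return float(np.sqrt(np.mean(np.sum((pose_a - pose_b) ** 2, axis=1))))

def has_correct_pose(sampled_poses, crystal_pose, threshold=2.0):
    """A docking case counts as a sampling success if ANY sampled pose
    falls within `threshold` Angstroms RMSD of the crystal pose."""
    return any(rmsd(p, crystal_pose) <= threshold for p in sampled_poses)

def success_rate(cases, threshold=2.0):
    """Fraction of (sampled_poses, crystal_pose) cases with at least one
    correct pose -- the per-test-set percentage reported in benchmarks."""
    return sum(has_correct_pose(s, c, threshold) for s, c in cases) / len(cases)
```

A sampler that frequently places at least one near-native pose in its candidate set gives a downstream scoring function something correct to rank highly, which is why sampling success is measured separately from scoring accuracy.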

https://arxiv.org/abs/2312.00191

In conclusion, GLOW and IVES are two powerful pose-sampling methods that have proven more effective than basic techniques, particularly in difficult scenarios and AlphaFold benchmarks. Multiple protein conformations can be generated with IVES, which is highly advantageous for geometric deep learning. Additionally, the datasets provided by GLOW and IVES, containing ligand poses for 5,000 protein-ligand pairs, are invaluable resources for researchers working on deep-learning-based scoring functions in molecular docking.


Check out the Paper and GitHub. All credit for this research goes to the researchers of this project.



Sana Hassan, a consulting intern at Marktechpost and dual-degree student at IIT Madras, is passionate about applying technology and AI to address real-world challenges. With a keen interest in solving practical problems, he brings a fresh perspective to the intersection of AI and real-life solutions.



