Category Ranking: 98%
Total Visits: 921
Avg Visit Duration: 2 minutes
Citations: 20

Article Abstract

Deep learning models have demonstrated their potential in learning effective molecular representations critical for drug property prediction and drug discovery. Despite significant advancements in leveraging multimodal drug molecule semantics, existing approaches often struggle with challenges such as low-quality data and structural complexity. Large language models (LLMs) excel in generating high-quality molecular representations due to their robust characterization capabilities. In this work, we introduce GICL, a cross-modal contrastive learning framework that integrates LLM-derived embeddings with molecular image representations. Specifically, LLMs extract feature representations from the SMILES strings of drug molecules, which are then contrasted with graphical representations of molecular images to achieve a holistic understanding of molecular features. Experimental results demonstrate that GICL achieves state-of-the-art performance on the ADMET task while offering interpretable insights into drug properties, thereby facilitating more efficient drug design and discovery.
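The abstract does not give implementation details, but the cross-modal contrastive objective it describes can be illustrated with a minimal, CLIP-style InfoNCE sketch that contrasts LLM-derived SMILES embeddings against molecular-image embeddings. The projection heads, input dimensions (768 for the LLM, 512 for the image encoder), and the temperature value below are illustrative assumptions, not the authors' GICL code.

```python
# Hedged sketch of a cross-modal contrastive (InfoNCE-style) objective between
# LLM-derived SMILES embeddings and molecular-image embeddings.
# Encoder sizes and temperature are assumptions for illustration, not GICL's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossModalContrastiveLoss(nn.Module):
    """Symmetric InfoNCE loss: matched SMILES/image pairs are pulled together,
    mismatched pairs within the batch are pushed apart."""

    def __init__(self, embed_dim: int = 256, temperature: float = 0.07):
        super().__init__()
        # Linear heads projecting each modality into a shared embedding space
        # (hypothetical input sizes: 768 for the LLM, 512 for the image encoder).
        self.smiles_proj = nn.Linear(768, embed_dim)
        self.image_proj = nn.Linear(512, embed_dim)
        self.temperature = temperature

    def forward(self, smiles_emb: torch.Tensor, image_emb: torch.Tensor) -> torch.Tensor:
        # Project both modalities and L2-normalize so dot products are cosine similarities.
        z_s = F.normalize(self.smiles_proj(smiles_emb), dim=-1)
        z_i = F.normalize(self.image_proj(image_emb), dim=-1)

        # Pairwise similarity matrix scaled by temperature; diagonal entries are positives.
        logits = z_s @ z_i.t() / self.temperature
        targets = torch.arange(logits.size(0), device=logits.device)

        # Symmetric cross-entropy over SMILES->image and image->SMILES directions.
        loss_s2i = F.cross_entropy(logits, targets)
        loss_i2s = F.cross_entropy(logits.t(), targets)
        return 0.5 * (loss_s2i + loss_i2s)


if __name__ == "__main__":
    # Toy batch of 8 molecules with stand-in random features; in the paper's setup the
    # SMILES embeddings would come from the LLM and the image embeddings from an image encoder.
    smiles_emb = torch.randn(8, 768)
    image_emb = torch.randn(8, 512)
    loss_fn = CrossModalContrastiveLoss()
    print(loss_fn(smiles_emb, image_emb).item())
```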

Source: http://dx.doi.org/10.1021/acs.jcim.5c00895

Publication Analysis

Top Keywords
gicl cross-modal: 8
drug property: 8
property prediction: 8
large language: 8
language models: 8
molecular representations: 8
drug: 7
molecular: 5
representations: 5
cross-modal drug: 4
