UNI-MOL: A UNIVERSAL 3D MOLECULAR REPRESENTATION LEARNING FRAMEWORK

Abstract

Molecular representation learning (MRL) has gained tremendous attention due to its critical role in learning from limited supervised data for applications like drug design. In most MRL methods, molecules are treated as 1D sequential tokens or 2D topology graphs, limiting their ability to incorporate 3D information for downstream tasks and, in particular, making 3D geometry prediction and generation almost impossible. In this paper, we propose a universal 3D MRL framework, called Uni-Mol, that significantly enlarges the representation ability and application scope of MRL schemes. Uni-Mol contains two pretrained models with the same SE(3) Transformer architecture: a molecular model pretrained on 209M molecular conformations, and a pocket model pretrained on 3M candidate protein pockets. In addition, Uni-Mol contains several finetuning strategies for applying the pretrained models to various downstream tasks. By properly incorporating 3D information, Uni-Mol outperforms SOTA in 14/15 molecular property prediction tasks. Moreover, Uni-Mol achieves superior performance in 3D spatial tasks, including protein-ligand binding pose prediction and molecular conformation generation. The code, model, and data are made publicly available.

1. INTRODUCTION

Recently, representation learning (also known as pretraining or self-supervised learning) [1; 2; 3] has become prevalent in many applications, such as BERT [4] and GPT [5; 6; 7] in Natural Language Processing (NLP) and ViT [8] in Computer Vision (CV). These applications share a common characteristic: unlabeled data is abundant, while labeled data is limited. As a solution, a typical representation learning method first adopts a pretraining procedure to learn a good representation from large-scale unlabeled data, followed by a finetuning scheme that extracts more information from the limited supervised data.

Applications in the field of drug design share the characteristic that calls for representation learning: the chemical space in which a drug candidate lies is vast, while drug-related labeled data is limited. Not surprisingly, compared with traditional molecular fingerprint-based models [9; 10], recent molecular representation learning (MRL) models perform much better in most property prediction tasks [11; 12; 13]. However, further improving the performance and extending the application scope of existing MRL models faces a critical issue. From the perspective of life science, the properties of molecules and the effects of drugs are mostly determined by their 3D structures [14; 15]. Most current MRL methods start by representing molecules as 1D sequential strings, such as SMILES [16; 17; 18] and InChI [19; 20; 21], or as 2D graphs [22; 11; 23; 12; 24]. This may limit their ability to incorporate 3D information for downstream tasks; in particular, it makes 3D geometry prediction or generation, such as the prediction of protein-ligand binding poses [25], almost impossible.
Although there have been some recent attempts to leverage 3D information in MRL [26; 27], their performance is less than optimal, possibly because the available 3D datasets are small and the 3D positions serve only as auxiliary information, so they cannot be used as inputs or outputs during finetuning. In this work, we propose Uni-Mol, to the best of our knowledge the first universal 3D molecular pretraining framework, which is learned from large-scale unlabeled data and can directly take 3D positions as both inputs and outputs. In particular, Uni-Mol consists of 3 parts. 1) Backbone: a Transformer-based model that can effectively capture the input 3D information and predict 3D positions directly. 2) Pretraining: two large-scale datasets, a 209M molecular conformation dataset and a 3M candidate protein pocket dataset, for pretraining two models on molecules and protein pockets, respectively; and two pretraining tasks, 3D position recovery and masked atom prediction, for effectively learning 3D spatial representations. 3) Finetuning: several finetuning strategies for various downstream tasks, for example, how to use the pretrained molecular model in molecular property prediction tasks, and how to combine the two pretrained models in protein-ligand binding pose prediction. We refer to Fig. 1 for an overall schematic illustration of the Uni-Mol framework; the details are described in Sec. 2.

To demonstrate the effectiveness of Uni-Mol, we conduct experiments on a series of downstream tasks. In molecular property prediction, Uni-Mol outperforms SOTA on 14/15 datasets of the MoleculeNet benchmark. In 3D geometric tasks, Uni-Mol also achieves superior performance. For the binding pose prediction of protein-ligand complexes, Uni-Mol predicts 80.35% of binding poses with RMSD ≤ 2Å, a 22.58% relative improvement over popular docking methods, and ranks 1st in the docking power test on the CASF-2016 [28] benchmark.
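To make the two pretraining tasks concrete, the input-corruption step they rely on (masking atom types and perturbing the coordinates to be recovered) can be sketched in a few lines of NumPy. This is a minimal illustration only; the function name, masking rate, and Gaussian noise model are simplifying assumptions, not the actual Uni-Mol implementation:

```python
import numpy as np

def corrupt_for_pretraining(atom_types, coords, mask_rate=0.15,
                            noise_scale=0.2, mask_token=0, rng=None):
    """Corrupt one conformation for two self-supervised tasks:
    (1) masked atom prediction: replace some atom types with a mask token;
    (2) 3D position recovery: perturb the coordinates of the masked atoms.
    Returns the corrupted inputs and the mask; the originals are the targets."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(atom_types)
    mask = rng.random(n) < mask_rate                      # which atoms to corrupt
    corrupted_types = np.where(mask, mask_token, atom_types)
    noise = mask[:, None] * rng.normal(0.0, noise_scale, size=(n, 3))
    corrupted_coords = coords + noise                     # only masked atoms move
    return corrupted_types, corrupted_coords, mask
```

The model is then trained to predict the original atom types and positions from the corrupted inputs, which forces the learned representation to encode local 3D geometry.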
Regarding molecular conformation generation, Uni-Mol achieves SOTA on both the Coverage and Matching metrics on GEOM-QM9 and GEOM-Drugs [29]. Moreover, Uni-Mol can be successfully applied to tasks with very limited data, such as pocket druggability prediction. To summarize, Uni-Mol makes the following contributions: 1) To the best of our knowledge, Uni-Mol is the first pure 3D molecular pretraining framework that can predict 3D positions, and the first molecular pretraining framework that can be directly used in 3D tasks in the field of drug design. 2) Based on extensive benchmarks, we build a simple and efficient SE(3) Transformer backbone and an effective 3D pretraining strategy in Uni-Mol. 3) Uni-Mol outperforms SOTA in various downstream tasks. 4) The whole Uni-Mol framework, including code, model, and data, will be made publicly available.

2.1. BACKBONE

In MRL, there are two well-known backbone models: graph neural networks (GNNs) [22; 23; 12] and Transformers [24; 11]. With a GNN backbone, locally connected graphs are often used to represent molecules for efficiency. However, a locally connected graph lacks the ability to capture long-range interactions among atoms, which we believe are important in MRL. Therefore, we choose the Transformer as the backbone model in Uni-Mol: it fully connects the atoms and thus can learn the possible long-range interactions.
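One common way for such a fully connected Transformer to consume 3D input while staying rotation- and translation-invariant is to encode pairwise atomic distances as a bias added to the attention scores. The NumPy sketch below illustrates the idea; the function names and the simple radial encoding are illustrative assumptions, not the exact Uni-Mol architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_with_distance_bias(h, coords, Wq, Wk, Wv, widths):
    """One self-attention head over all atoms (fully connected), with an
    SE(3)-invariant bias computed from pairwise distances.
    h: (n, d) atom features; coords: (n, 3); widths: radial basis scales."""
    q, k, v = h @ Wq, h @ Wk, h @ Wv
    # Pairwise distances are invariant to rotating/translating the molecule.
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)  # (n, n)
    # Simple Gaussian radial encoding collapsed to a scalar bias per pair.
    bias = np.exp(-(d[..., None] / widths) ** 2).sum(axis=-1)             # (n, n)
    scores = q @ k.T / np.sqrt(q.shape[-1]) + bias
    return softmax(scores) @ v
```

Because only inter-atomic distances enter the attention, rotating or translating the input conformation leaves the output representation unchanged, while every atom can still attend to every other atom regardless of graph distance.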



Note that although the backbone can output SE(3)-equivariant positions, it is internally built on SE(3)-invariant representations.
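This combination is possible because a position update built from invariant pair weights and coordinate difference vectors is automatically SE(3)-equivariant. The NumPy sketch below demonstrates the principle; it is a simplified illustration under that assumption, not Uni-Mol's actual coordinate head:

```python
import numpy as np

def equivariant_position_update(coords, pair_weights):
    """Update atom positions using only SE(3)-invariant pair weights.
    Each atom moves along the difference vectors to the other atoms, so the
    update rotates and translates together with the input coordinates.
    coords: (n, 3); pair_weights: (n, n), e.g. derived from attention."""
    diff = coords[:, None, :] - coords[None, :, :]   # (n, n, 3) difference vectors
    delta = (pair_weights[..., None] * diff).sum(axis=1) / len(coords)
    return coords + delta
```

If the input coordinates are rotated by R and shifted by t, the difference vectors rotate by R (the shift cancels), so the output positions are exactly the original output rotated by R and shifted by t: equivariant outputs from invariant weights.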



Figure 1: Schematic illustration of the Uni-Mol framework.


