A TRAINABLE OPTIMAL TRANSPORT EMBEDDING FOR FEATURE AGGREGATION AND ITS RELATIONSHIP TO ATTENTION

Abstract

We address the problem of learning on sets of features, motivated by the need to perform pooling operations over long biological sequences of varying sizes, with long-range dependencies, and possibly few labeled data. To address this challenging task, we introduce a parametrized representation of fixed size, which embeds and then aggregates elements from a given input set according to the optimal transport plan between the set and a trainable reference. Our approach scales to large datasets and allows end-to-end training of the reference, while also providing a simple unsupervised learning mechanism with small computational cost. Our aggregation technique admits two useful interpretations: it may be seen as a mechanism related to attention layers in neural networks, or as a scalable surrogate of a classical optimal transport-based kernel. We experimentally demonstrate the effectiveness of our approach on biological sequences, achieving state-of-the-art results on protein fold recognition and chromatin profile detection tasks, and, as a proof of concept, we show promising results on natural language sequences. We provide an open-source implementation of our embedding, which can be used alone or as a module in larger learning models, at https://github.com/claying/OTK.

1. INTRODUCTION

Many scientific fields, such as bioinformatics or natural language processing (NLP), require processing sets of features with positional information (biological sequences, or sentences represented as sets of local features). These objects are delicate to manipulate owing to their varying lengths and potentially long-range dependencies between their elements. For many tasks, the difficulty is even greater: the sets may be arbitrarily large, or provided with few labels, or both. Deep learning architectures specifically designed for sets have recently been proposed (Lee et al., 2019; Skianis et al., 2020). Our experiments show that these architectures perform well on NLP tasks, but achieve mixed performance on long biological sequences of varying size with few labeled data. Some of these models use attention (Bahdanau et al., 2015), a classical mechanism for aggregating features. Its most prominent instantiation is the transformer (Vaswani et al., 2017), which has been shown to achieve state-of-the-art results on many sequence modeling tasks, e.g., in NLP (Devlin et al., 2019) or in bioinformatics (Rives et al., 2019), when trained with self-supervision on large-scale data. Beyond sequence modeling, our interest in this paper is in finding a good representation for sets of features of potentially diverse sizes, with or without positional information, when training data may be scarce. To this end, we introduce a trainable embedding that can operate directly on the feature set or be combined with existing deep approaches. More precisely, our embedding marries ideas from optimal transport (OT) theory (Peyré & Cuturi, 2019) and kernel methods (Schölkopf & Smola, 2001). We call this embedding OTKE (Optimal
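To make the aggregation idea concrete, the sketch below illustrates one plausible reading of the mechanism described above: a similarity matrix between the input features and a trainable reference is turned into an entropy-regularized transport plan via Sinkhorn iterations, and the plan is then used as attention-like weights to pool a variable-size set into a fixed-size representation. The function names (`sinkhorn`, `otk_pool`), the entropic parameter `eps`, and the choice of uniform marginals are our illustrative assumptions, not necessarily the paper's exact formulation.

```python
import numpy as np

def sinkhorn(K, n_iter=50):
    """Rescale a positive matrix K (n x p) so that rows sum to 1/n and
    columns sum to 1/p, approximating an entropy-regularized optimal
    transport plan between two uniform distributions."""
    n, p = K.shape
    P = K / K.sum()
    for _ in range(n_iter):
        P = P / (n * P.sum(axis=1, keepdims=True))  # rows -> 1/n
        P = P / (p * P.sum(axis=0, keepdims=True))  # cols -> 1/p
    return P

def otk_pool(X, Z, eps=1.0):
    """Pool a variable-size set X (n x d) into a fixed-size
    representation (p x d) using the transport plan between X and a
    trainable reference Z (p x d)."""
    K = np.exp(X @ Z.T / eps)  # Gibbs kernel of a dot-product similarity
    P = sinkhorn(K)            # transport plan, shape (n, p)
    # Each reference element aggregates the inputs transported to it;
    # scaling by p makes the weights attached to each reference sum to ~1.
    return Z.shape[0] * P.T @ X

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 3))  # an input set of 10 features
Z = rng.normal(size=(4, 3))   # trainable reference with 4 elements
out = otk_pool(X, Z)
print(out.shape)              # (4, 3): fixed size, independent of n
```

Whatever the input set size n, the output has shape (p, d), which is what lets downstream layers of fixed dimension consume it; in a full model, Z would be learned end-to-end (e.g., as a framework parameter) or initialized without labels, consistent with the unsupervised mechanism mentioned above.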

