CROSS-DOMAIN FEW-SHOT LEARNING BY REPRESENTATION FUSION

Abstract

Few-shot learning aims at learning from few examples, often by using already acquired knowledge, in order to quickly adapt to new data. The new data often differs from the previously seen data due to a domain shift, that is, a change of the input-target distribution. While several methods perform well on small domain shifts like new target classes with similar inputs, larger domain shifts are still challenging. Large domain shifts may result in high-level concepts that are not shared between the original and the new domain. However, low-level concepts like edges in images might still be shared and useful. For cross-domain few-shot learning, we suggest representation fusion to unify different abstraction levels of a deep neural network into one representation. We propose Cross-domain Hebbian Ensemble Few-shot learning (CHEF), which achieves representation fusion by an ensemble of Hebbian learners acting on different layers of a deep neural network that was trained on the original domain. On the few-shot datasets miniImagenet and tieredImagenet, where the domain shift is small, CHEF is competitive with state-of-the-art methods. On cross-domain few-shot benchmark challenges with larger domain shifts, CHEF establishes novel state-of-the-art results in all categories. We further apply CHEF to a real-world cross-domain application in drug discovery. We consider a domain shift from bioactive molecules to environmental chemicals and drugs with twelve associated toxicity prediction tasks. On these tasks, which are highly relevant for computational drug discovery, CHEF significantly outperforms all its competitors.

1. INTRODUCTION

Currently, deep learning is criticized because it is data hungry, has limited capacity for transfer, insufficiently integrates prior knowledge, and presumes a largely stable world (Marcus, 2018). In particular, these problems appear after a domain shift, that is, a change of the input-target distribution. A domain shift forces deep learning models to adapt. The goal is to exploit models that were trained on the typically rich original data for solving tasks from the new domain with much less data. Examples of domain shifts are new users or customers, new products and product lines, new diseases (e.g. adapting from SARS to COVID-19), new images from another field (e.g. from cats to dogs or from cats to bicycles), new social behaviors after societal change (e.g. introduction of cell phones, pandemic), self-driving cars in new cities or countries (e.g. from European countries to Arabic countries), and robot manipulation of new objects. Domain shifts are often tackled by meta-learning (Schmidhuber, 1987; Bengio et al., 1990; Hochreiter et al., 2001), since it exploits already acquired knowledge to adapt to new data. One prominent application of meta-learning dealing with domain shifts is few-shot learning, since, typically, much less data is available from the new domain than from the original domain. Meta-learning methods perform well on small domain shifts like new target classes with similar inputs. However, larger domain shifts are still challenging for current approaches. Large domain shifts lead to inputs that are considerably different from the original inputs and possess different high-level concepts. Nonetheless, low-level concepts are often still shared between the inputs of the original domain and the inputs of the new domain. For images, such shared low-level concepts can be edges, textures, small shapes, etc. One way of obtaining low-level concepts is to train a new deep learning model from scratch, where the new data is merged with the original data.
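The idea of representation fusion described above can be illustrated with a small schematic. The sketch below is a toy simplification, not the CHEF algorithm itself: it uses random arrays in place of activations from a pretrained backbone, a one-step Hebbian outer-product rule as the per-layer few-shot learner, and plain score averaging as the fusion step; all function names and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy few-shot episode: 10 labeled support examples, 5 queries, 2 classes.
n_support, n_query, n_classes = 10, 5, 2
support_labels = rng.integers(0, n_classes, n_support)

# Stand-ins for activations taken from three layers of a pretrained network
# (different abstraction levels have different feature dimensions).
layer_dims = [64, 128, 256]

def hebbian_readout(support_feats, labels, n_classes):
    """One-step Hebbian rule: each class weight vector is the sum of the
    outer products of support features with their one-hot labels."""
    targets = np.eye(n_classes)[labels]        # (n_support, n_classes)
    return support_feats.T @ targets           # (dim, n_classes)

# Representation fusion: combine class scores from a readout on every layer.
scores = np.zeros((n_query, n_classes))
for dim in layer_dims:
    support_feats = rng.normal(size=(n_support, dim))  # per-layer support activations
    query_feats = rng.normal(size=(n_query, dim))      # per-layer query activations
    w = hebbian_readout(support_feats, support_labels, n_classes)
    scores += (query_feats @ w) / np.sqrt(dim)         # scale to balance layer sizes

predictions = scores.argmax(axis=1)
print(predictions.shape)
```

The key point of the sketch is structural: each layer contributes its own learner over its own feature space, and the ensemble decision uses all abstraction levels at once, so shared low-level concepts can still drive predictions when high-level concepts do not transfer.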
However, although models of the original domain are often available, the original data on which the models were trained often is not. This might have several reasons, e.g. the data owner no longer grants access to the data, General

