DISENTANGLING 3D PROTOTYPICAL NETWORKS FOR FEW-SHOT CONCEPT LEARNING

Abstract

We present neural architectures that disentangle RGB-D images into objects' shapes and styles and a map of the background scene, and we explore their applications for few-shot 3D object detection and few-shot concept classification. Our networks incorporate architectural biases that reflect the image formation process, the 3D geometry of the world scene, and the shape-style interplay. They are trained end-to-end, self-supervised by predicting views in static scenes, alongside a small number of 3D object boxes. Objects and scenes are represented as 3D feature grids in the bottleneck of the network. We show that the proposed 3D neural representations are compositional: they can generate novel 3D scene feature maps by mixing object shapes and styles, and by resizing and adding the resulting object 3D feature maps over background scene feature maps. We show that classifiers for object categories, colors, materials, and spatial relationships trained over the disentangled 3D feature sub-spaces generalize better with dramatically fewer examples than the current state-of-the-art, and they enable a visual question answering system that uses them as its modules to generalize one-shot to novel objects in the scene.

1. INTRODUCTION

Humans can learn new concepts from just one or a few samples. Consider the example in Figure 1. Suppose a person has no prior knowledge of blue or carrot; by showing this person an image of a blue carrot and telling them "this is a carrot with blue color", the person can easily generalize from this single example to (1) recognizing carrots of varying colors, 3D poses, and viewing conditions, and under novel background scenes, (2) recognizing the color blue on different objects, (3) combining these two concepts with other concepts to form a novel object coloring he/she has never seen before, e.g., a red carrot or a blue tomato, and (4) using the newly learned concepts to answer questions about the visual scene. Motivated by this, we explore computational models that can achieve these four types of generalization for visual concept learning. We propose disentangling 3D prototypical networks (D3DP-Nets), a model that learns to disentangle RGB-D images into objects, their 3D locations, sizes, 3D shapes and styles, and the background scene, as shown in Figure 2. Our model can learn to detect objects from a few 3D object bounding box annotations and can further disentangle objects into different attributes through a self-supervised view prediction task. Specifically, D3DP-Nets use differentiable unprojection and rendering operations to go back and forth between the input RGB-D (2.5D) image and a 3D scene feature map. From the scene feature map, our model learns to detect objects and disentangles each object into a 3D shape code and a 1D style code through a shape/style disentangling autoencoder. We use adaptive instance normalization layers (Huang & Belongie, 2017) to encourage shape/style disentanglement within each object.
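To make the shape/style separation concrete, the following is a minimal NumPy sketch of adaptive instance normalization applied to a 3D feature grid: the per-channel statistics of the shape features are replaced by a scale and bias derived from the 1D style code, so spatial structure and appearance statistics are carried separately. The split of the style code into (scale, bias) halves is an illustrative assumption, not the paper's exact parameterization.

```python
import numpy as np

def adain(shape_feat, style_code, eps=1e-5):
    """Adaptive instance normalization (Huang & Belongie, 2017), sketched
    for a single 3D feature grid of shape (C, D, H, W).

    The shape features are normalized per channel, then re-styled with an
    affine transform predicted from the 1D style code. Here we simply
    assume `style_code` concatenates C scales followed by C biases.
    """
    C = shape_feat.shape[0]
    scale, bias = style_code[:C], style_code[C:]          # 1D style -> per-channel affine
    mu = shape_feat.mean(axis=(1, 2, 3), keepdims=True)   # instance statistics
    sigma = shape_feat.std(axis=(1, 2, 3), keepdims=True)
    normalized = (shape_feat - mu) / (sigma + eps)        # strip style statistics
    return scale[:, None, None, None] * normalized + bias[:, None, None, None]

# Mixing: the shape grid of object A rendered with the style of object B
shape_a = np.random.randn(8, 4, 4, 4)
style_b = np.random.randn(16)  # 8 scales followed by 8 biases
mixed = adain(shape_a, style_b)
```

After the transform, each channel of `mixed` carries the spatial structure of `shape_a` but the first/second-order statistics dictated by `style_b`, which is what makes swapping styles across objects a simple code exchange.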
Our key intuition is to represent objects and their shapes in terms of 3D feature representations disentangled from style variability, so that the model can match objects with similar shapes by explicitly rotating and scaling their 3D shape representations during matching. We test D3DP-Nets on few-shot concept learning, visual question answering (VQA), and scene generation. We train concept classifiers for object shapes, object colors/materials, and spatial relationships on our inferred disentangled feature spaces, and show that they outperform the current state-of-the-art (Mao et al., 2019; Hu et al., 2016), which uses 2D representations. We show that a VQA modular network that incorporates our concept classifiers exhibits improved generalization over the state-of-the-art (Mao et al., 2019) with dramatically fewer examples. Last, we empirically show that D3DP-Nets generalize their view predictions to scenes with novel numbers, categories, and styles of objects, and we compare against the state-of-the-art view-predictive architecture of Eslami et al. (2018).

The main contribution of this paper is to identify the importance of using disentangled 3D feature representations for few-shot concept learning. We show that disentangled 3D feature representations can be learned using self-supervised view prediction, and that they are useful for detecting and classifying language concepts by training classifiers over only the relevant feature subsets. The proposed model outperforms the current state-of-the-art in VQA in the low-data regime, and the proposed 3D disentangled representation outperforms similar 2D or 2.5D ones in few-shot concept classification.
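The matching intuition above can be sketched as a search over explicit transformations of a 3D shape grid: the similarity between a prototype and a candidate is the best cosine score over transformed copies of the prototype. For brevity this illustration searches only the four 90-degree yaw rotations of a (C, D, H, W) grid; the model itself would search a finer set of rotations and scales. The function and its name are illustrative, not the paper's implementation.

```python
import numpy as np

def shape_similarity(proto, candidate):
    """Pose-invariant matching of two 3D shape feature grids of shape
    (C, D, H, W): score the candidate against rotated copies of the
    prototype and keep the best cosine similarity."""
    def cosine(a, b):
        a, b = a.ravel(), b.ravel()
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
    # Yaw rotations about the vertical axis rotate the (D, W) plane.
    rotations = [np.rot90(proto, k=k, axes=(1, 3)) for k in range(4)]
    return max(cosine(r, candidate) for r in rotations)

rng = np.random.default_rng(0)
proto = rng.standard_normal((2, 4, 4, 4))
rotated = np.rot90(proto, k=1, axes=(1, 3))  # same shape, different pose
sim = shape_similarity(proto, rotated)       # near 1.0 despite the rotation
```

Because the representation is an explicit 3D grid, pose variation can be searched over directly at matching time instead of being absorbed into the learned features, which is precisely what a 2D representation cannot do.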

2. RELATION TO PREVIOUS WORKS

Few-shot concept learning Few-shot learning methods attempt to learn a new concept from one or a few annotated examples at test time; yet, at training time, these models still require labelled datasets that annotate groups of images as "belonging to the same category" (Koch et al., 2015; Vinyals et al., 2016b). Metric-based few-shot learning approaches (Snell et al., 2017; Qi et al., 2018; Schwartz et al., 2018; Vinyals et al., 2016a) aim to learn an embedding space in which objects of the same category lie closer together than objects of different categories. These models need to be trained with several (annotated) image collections, where each collection contains images of the same object category. The works of Misra et al. (2017); Purushwalkam et al. (2019); Nikolaus et al. (2019); Tokmakov et al. (2019) compose attributes and nouns to detect novel attribute-noun combinations, but their feature extractors need to be pretrained on large annotated image collections, such as ImageNet, or require annotated data with various attribute compositions. The proposed

Figure 1: Given a single image-language example regarding new concepts (e.g., blue and carrot), our model can parse the object into its shape and style codes and ground them with the Blue and Carrot labels, respectively. On the right, we show tasks the proposed model can achieve using this grounding. (a) It can detect the object under novel style, novel pose, and in novel scene arrangements and viewpoints. (b) It can detect a new concept like blue broccoli. (c) It can imagine scenes with the new concepts. (d) It can answer complex questions about the scene.

https://mihirp1998.github.io/project_pages/d3dp/

