IMPROVING OBJECT-CENTRIC LEARNING WITH QUERY OPTIMIZATION

Abstract

The ability to decompose complex natural scenes into meaningful object-centric abstractions lies at the core of human perception and reasoning. In the recent surge of unsupervised object-centric learning, the Slot-Attention module has played an important role with its simple yet effective design and has fostered many powerful variants. These methods, however, have been exceedingly difficult to train without supervision and are ambiguous in their notion of an object, especially for complex natural scenes. In this paper, we propose to address these issues by investigating the potential of learnable queries as initializations for Slot-Attention learning, uniting this direction with existing attempts to improve Slot-Attention learning through bi-level optimization. With simple code adjustments to Slot-Attention, our model, Bi-level Optimized Query Slot Attention, achieves state-of-the-art results on 3 challenging synthetic and 7 complex real-world datasets in unsupervised image segmentation and reconstruction, outperforming previous baselines by a large margin. We provide thorough ablation studies to validate the necessity and effectiveness of our design. Additionally, our model exhibits great potential for concept binding and zero-shot learning.

1. INTRODUCTION

Objects, and their interactions, are the foundations of human cognition (Spelke & Kinzler, 2007). The capacity to form abstractions from perception and to organize them systematically enables humans to accomplish, and generalize across, a broad range of tasks, such as scene modeling (Bear et al., 2020), visual reasoning (Yi et al., 2020), and simulating interactions (Bear et al., 2020). The key to such success lies in the emergence of symbol-like mental representations of object concepts (Whitehead, 1928). However, important as it is, disentangling object-centric concepts from visual stimuli is an exceedingly difficult task to accomplish with limited supervision (Greff et al., 2020) and requires proper inductive biases (Schölkopf et al., 2021). Motivated by the development of symbolic thought in human cognition, slot-based representations, whether instance (Greff et al., 2017; 2019; Locatello et al., 2020), sequential (Gregor et al., 2015; Burgess et al., 2019; Engelcke et al., 2021; Goyal et al., 2021), or spatial (Crawford & Pineau, 2019; Lin et al., 2020; Jiang et al., 2019), have been the key inductive bias behind recent advances in unsupervised object-centric learning. Among them, the Slot-Attention module has received tremendous focus given its simple yet effective design (Locatello et al., 2020). By leveraging an iterative attention mechanism, Slot-Attention lets slots compete to explain parts of the input, exhibiting a soft-clustering effect on visual signals. It has since proven to be more memory- and training-efficient as a plug-and-play module for unsupervised object-centric learning (Locatello et al., 2020) and has fostered powerful variants for understanding images (Singh et al., 2021; Xu et al., 2022), 3D scenes (Yu et al., 2022; Sajjadi et al., 2022a), and videos (Kipf et al., 2022; Elsayed et al., 2022; Singh et al., 2022).
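To make the iterative competition mechanism concrete, the following is a minimal PyTorch sketch of the Slot-Attention module described above (Locatello et al., 2020). It is a simplified illustration, not the authors' exact implementation: hyperparameters are arbitrary, and the residual MLP update of the original module is omitted for brevity. The key step is the softmax over the slot axis, which makes slots compete to explain each input feature.

```python
import torch
import torch.nn as nn

class SlotAttention(nn.Module):
    """Simplified sketch of Slot-Attention (Locatello et al., 2020)."""

    def __init__(self, num_slots=4, dim=64, iters=3, eps=1e-8):
        super().__init__()
        self.num_slots, self.iters, self.eps = num_slots, iters, eps
        self.scale = dim ** -0.5
        # Slots are sampled from a learned Gaussian at every forward pass
        self.slots_mu = nn.Parameter(torch.randn(1, 1, dim))
        self.slots_log_sigma = nn.Parameter(torch.zeros(1, 1, dim))
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.gru = nn.GRUCell(dim, dim)
        self.norm_input = nn.LayerNorm(dim)
        self.norm_slots = nn.LayerNorm(dim)

    def forward(self, inputs):  # inputs: (batch, num_inputs, dim)
        b, n, d = inputs.shape
        inputs = self.norm_input(inputs)
        k, v = self.to_k(inputs), self.to_v(inputs)
        # Random initialization: draw slots from the learned Gaussian
        slots = self.slots_mu + self.slots_log_sigma.exp() * torch.randn(
            b, self.num_slots, d)
        for _ in range(self.iters):
            slots_prev = slots
            q = self.to_q(self.norm_slots(slots))
            # Softmax over the SLOT axis: slots compete for each input
            attn = (q @ k.transpose(1, 2) * self.scale).softmax(dim=1)
            attn = attn + self.eps
            # Normalize over inputs to form a weighted mean of values
            attn = attn / attn.sum(dim=-1, keepdim=True)
            updates = attn @ v  # (batch, num_slots, dim)
            slots = self.gru(updates.reshape(-1, d),
                             slots_prev.reshape(-1, d)).reshape(b, -1, d)
        return slots
```

Because the attention is normalized across slots rather than across inputs, increasing one slot's claim on a feature necessarily decreases the others', which is the source of the soft-clustering behavior.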
However, as revealed by recent studies, the Slot-Attention module carries inherent limitations for object-centric representation learning. First, with slots randomly initialized on every forward pass, the object-centric representations obtained by these models do not necessarily bind to object concepts (Kipf et al., 2022). Intuitively, such randomness leads to undesired scenarios where slots with similar
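The learnable-query alternative motivated above can be sketched as follows: instead of re-sampling slot initializations from a Gaussian on every forward pass, a fixed set of learned query embeddings is shared across all inputs. This is an illustrative sketch under assumed names (`LearnableSlotInit`, `queries` are hypothetical), not the paper's exact implementation, which additionally involves bi-level optimization of these queries.

```python
import torch
import torch.nn as nn

class LearnableSlotInit(nn.Module):
    """Sketch: deterministic learnable queries as slot initializations,
    replacing per-sample Gaussian sampling. Names are illustrative."""

    def __init__(self, num_slots=4, dim=64):
        super().__init__()
        # One learned embedding per slot, shared across all inputs
        self.queries = nn.Parameter(torch.randn(num_slots, dim) * dim ** -0.5)

    def forward(self, batch_size):
        # Identical initialization for every sample, so each slot can
        # consistently specialize toward (bind to) a concept
        return self.queries.unsqueeze(0).expand(batch_size, -1, -1)
```

Since every sample starts from the same queries, each slot sees a consistent starting point across training, removing the randomness that otherwise prevents slots from binding to stable object concepts.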

