IMPROVING OBJECT-CENTRIC LEARNING WITH QUERY OPTIMIZATION

Abstract

The ability to decompose complex natural scenes into meaningful object-centric abstractions lies at the core of human perception and reasoning. In the recent culmination of unsupervised object-centric learning, the Slot-Attention module has played an important role with its simple yet effective design, fostering many powerful variants. These methods, however, are exceedingly difficult to train without supervision and leave the notion of an object ambiguous, especially for complex natural scenes. In this paper, we address these issues by investigating the potential of learnable queries as initializations for Slot-Attention, uniting this idea with existing attempts to improve Slot-Attention via bi-level optimization. With simple code adjustments to Slot-Attention, our model, Bi-level Optimized Query Slot Attention (BO-QSA), achieves state-of-the-art results on 3 challenging synthetic and 7 complex real-world datasets for unsupervised image segmentation and reconstruction, outperforming previous baselines by a large margin. We provide thorough ablative studies validating the necessity and effectiveness of our design. Additionally, our model exhibits great potential for concept binding and zero-shot learning.

1. INTRODUCTION

Objects, and their interactions, are the foundations of human cognition (Spelke & Kinzler, 2007). The ability to form abstractions from perception and organize them systematically allows humans to accomplish and generalize across a broad range of tasks, such as scene modeling (Bear et al., 2020), visual reasoning (Yi et al., 2020), and simulating interactions (Bear et al., 2020). The key to such success lies in the emergence of symbol-like mental representations of object concepts (Whitehead, 1928). However, important as it is, disentangling object-centric concepts from visual stimuli is an exceedingly difficult task to accomplish with limited supervision (Greff et al., 2020) and requires proper inductive biases (Schölkopf et al., 2021). Motivated by the development of symbolic thought in human cognition, slot-based representations, whether instance-based (Greff et al., 2017; 2019; Locatello et al., 2020), sequential (Gregor et al., 2015; Burgess et al., 2019; Engelcke et al., 2021; Goyal et al., 2021), or spatial (Crawford & Pineau, 2019; Lin et al., 2020; Jiang et al., 2019), have been the key inductive bias behind recent advances in unsupervised object-centric learning. Among them, the Slot-Attention module has received tremendous attention given its simple yet effective design (Locatello et al., 2020). By leveraging an iterative attention mechanism, Slot-Attention makes slots compete to explain parts of the input, exhibiting a soft-clustering effect on visual signals. It has since proven to be more memory- and training-efficient as a plug-and-play module for unsupervised object-centric learning (Locatello et al., 2020) and has fostered powerful variants for understanding images (Singh et al., 2021; Xu et al., 2022), 3D scenes (Yu et al., 2022; Sajjadi et al., 2022a), and videos (Kipf et al., 2022; Elsayed et al., 2022; Singh et al., 2022).
However, as revealed by recent studies, the Slot-Attention module comes with innate discrepancies for object-centric representation learning. First, with slots randomly initialized for each input, the object-centric representations obtained by these models do not necessarily bind to object concepts (Kipf et al., 2022). Intuitively, such randomness leads to undesired scenarios where slots with similar initializations compete for objects on different images. Such randomness also challenges the iterative refinement procedure, as it now needs to project sets of potentially similar representations to independent constituents of the input. As discovered by Chang et al. (2022), differentiating through such recurrences contributes to various training instabilities as the spectral norm of the Slot-Attention weights grows. This leads to the second and perhaps least desired property of Slot-Attention: it relies heavily on hyper-parameter tuning, including gradient clipping, learning-rate warm-up, etc., which further hurts the flexibility of Slot-Attention in adapting to broader applications with more complex signals. To this end, we propose an extension of the Slot-Attention module, Bi-level Optimized Query Slot Attention (BO-QSA), to tackle the aforementioned problems. First, we follow the bi-level optimization framework proposed by Chang et al. (2022) to ease the training difficulty of Slot-Attention. More importantly, instead of sampling from a learnable Gaussian distribution, we propose to directly learn the slot initializations as queries. With these learnable representations, we eliminate the ambiguous competition between slots and give them a better chance to bind to specific object concepts.
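The change from sampled to learned initializations can be sketched in a few lines of PyTorch. This is a minimal illustration, not the authors' code; all variable names are ours, and the small-scale query initialization is an assumption.

```python
import torch
import torch.nn as nn

K, D_slots, B = 4, 32, 2  # illustrative sizes: slots, slot dim, batch

# Slot-Attention initialization: a shared learnable Gaussian, re-sampled
# per image, so slots with similar draws may compete across images.
mu = nn.Parameter(torch.zeros(1, K, D_slots))
log_sigma = nn.Parameter(torch.zeros(1, K, D_slots))
s0_gaussian = mu + log_sigma.exp() * torch.randn(B, K, D_slots)

# Query initialization (as described above): deterministic learnable
# queries, identical for every image, so each slot can specialize.
queries = nn.Parameter(torch.randn(1, K, D_slots) * 0.02)
s0_query = queries.expand(B, -1, -1)
```

Because `s0_query` is the same for every input, each slot sees a consistent starting point across the dataset, which is what removes the ambiguous competition described above.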
We improve the training of query-initialized Slot-Attention with a straight-through gradient estimator (STE), connecting our method with first-order approaches (Finn et al., 2017; Nichol & Schulman, 2018; Geng et al., 2021) to solving bi-level optimization problems. Experimental results show that the proposed BO-QSA achieves state-of-the-art results on both synthetic and real-world image datasets with simple code adjustments to the original Slot-Attention module. With our model significantly outperforming previous methods in both domains, we provide thorough ablative studies demonstrating the effectiveness of our design. We further show that BO-QSA possesses the potential of binding object concepts to slots. To validate this potential, we design zero-shot transfer experiments that demonstrate the generalization power of our model on unsupervised object-centric learning. As the experiments suggest (see Sec. 5), our model could be a principled approach for unsupervised object-centric learning and serve as a general plug-and-play module for a broader range of modalities where variants of Slot-Attention prosper. We hope these efforts help foster new insights in the field of object-centric learning.

Contributions. In summary, our main contributions are three-fold:
• We propose BO-QSA, a query-initialized Slot-Attention model that unites straight-through gradient updates to learnable queries with methods for improving Slot-Attention through bi-level optimization.
• We show that, with simple code adjustments to Slot-Attention, the proposed BO-QSA achieves state-of-the-art results on several challenging synthetic and real-world image benchmarks, outperforming previous methods by a large margin.
• We show the potential of BO-QSA as a better approach to concept binding and to learning generalizable representations, through qualitative results and zero-shot transfer experiments.
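The straight-through, first-order update described in this section can be sketched in PyTorch as follows. This is a minimal sketch of our reading of the method, not the official implementation: the helper name `bo_qsa_forward` and the exact placement of the detach are our assumptions.

```python
import torch


def bo_qsa_forward(step, slots_init, x, T=3):
    """Sketch of query-initialized Slot-Attention with a straight-through
    gradient estimator.

    `step(slots, x)` is assumed to be one Slot-Attention refinement
    iteration. The first T-1 iterations run without gradients (a
    first-order approximation that avoids backpropagating through the
    recurrence); the STE then re-attaches the learnable queries so they
    receive the gradient of the final, differentiable step directly.
    """
    slots = slots_init
    with torch.no_grad():  # inner loop: no backprop through the recurrence
        for _ in range(T - 1):
            slots = step(slots, x)
    # Straight-through: forward value is the refined slots; backward
    # gradient flows to the learnable queries slots_init.
    slots = slots.detach() + slots_init - slots_init.detach()
    return step(slots, x)  # single differentiable final iteration
```

Detaching the inner iterations sidesteps the training instabilities that Chang et al. (2022) attribute to differentiating through the Slot-Attention recurrence, while the STE keeps the queries trainable.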
2. PRELIMINARIES

Slot-Attention (Locatello et al., 2020) takes a set of $N$ input feature vectors $x \in \mathbb{R}^{N \times D_{\text{input}}}$ and maps them to a set of $K$ output vectors (i.e., slots) $s \in \mathbb{R}^{K \times D_{\text{slots}}}$. In the image domain, we take as input a set of images $\mathcal{I}$ and encode them with $f_{\phi_{\text{enc}}}$ to obtain the input features $x$. Slot-Attention leverages an iterative attention mechanism, first mapping inputs and slots to a common dimension $D$ with linear transformations $k(\cdot)$, $q(\cdot)$, and $v(\cdot)$, parameterized by $\phi_{\text{attn}}$. At each iteration, the slots compete to explain parts of the visual input by computing the attention matrix $A$ with a softmax over the slot dimension and updating the slots with the weighted average of the visual values:

$$\tilde{s} = f_{\phi_{\text{attn}}}(s, x) = A^{\top} v(x), \quad \text{where } A = \operatorname{softmax}_{\text{slots}}\!\left(\frac{k(x)\, q(s)^{\top}}{\sqrt{D}}\right) \in \mathbb{R}^{N \times K}.$$

The slots are initialized from a learnable Gaussian distribution with mean $\mu$ and variance $\sigma$. They are refined iteratively within the Slot-Attention module by passing the updates into a Gated Recurrent Unit (GRU) (Cho et al., 2014) and an MLP, parameterized by $\phi_{\text{update}}$, for $T$ iterations:

$$s^{(t+1)} = h_{\phi_{\text{update}}}(\tilde{s}^{(t)}, s^{(t)}), \quad s^{(0)} \sim \mathcal{N}(\mu, \operatorname{diag}(\sigma)), \quad \hat{s} = s^{(T)}. \tag{1}$$

The final prediction $\hat{s}$ can be treated as the learned object-centric representation w.r.t. the input features $x$.
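The preliminaries above can be summarized in a minimal PyTorch sketch of the Slot-Attention module. Layer sizes, the omission of LayerNorm, and the residual MLP update are our simplifying assumptions rather than the exact reference implementation.

```python
import torch
import torch.nn as nn


class SlotAttention(nn.Module):
    """Minimal sketch of Slot-Attention (Locatello et al., 2020)."""

    def __init__(self, num_slots, d_input, d_slots, d=64, T=3):
        super().__init__()
        self.T = T
        self.scale = d ** -0.5
        # Learnable Gaussian initialization: s^(0) ~ N(mu, diag(sigma))
        self.mu = nn.Parameter(torch.zeros(1, num_slots, d_slots))
        self.log_sigma = nn.Parameter(torch.zeros(1, num_slots, d_slots))
        # Linear maps k(.), q(.), v(.) to the common dimension D
        self.k = nn.Linear(d_input, d, bias=False)
        self.q = nn.Linear(d_slots, d, bias=False)
        self.v = nn.Linear(d_input, d, bias=False)
        # GRU + MLP update h_{phi_update}
        self.gru = nn.GRUCell(d, d_slots)
        self.mlp = nn.Sequential(
            nn.Linear(d_slots, d_slots), nn.ReLU(), nn.Linear(d_slots, d_slots)
        )

    def step(self, slots, x):
        B, K, _ = slots.shape
        # M = k(x) q(s)^T / sqrt(D); softmax over slots -> competition
        attn = torch.einsum("bnd,bkd->bnk", self.k(x), self.q(slots)) * self.scale
        attn = attn.softmax(dim=-1)
        # Normalize over inputs so each slot takes a weighted mean of values
        attn = attn / (attn.sum(dim=1, keepdim=True) + 1e-8)
        updates = torch.einsum("bnk,bnd->bkd", attn, self.v(x))
        slots = self.gru(
            updates.reshape(-1, updates.shape[-1]),
            slots.reshape(-1, slots.shape[-1]),
        ).view(B, K, -1)
        return slots + self.mlp(slots)  # residual MLP refinement

    def forward(self, x):
        B = x.shape[0]
        # s^(0) sampled per image from the learnable Gaussian
        slots = self.mu + self.log_sigma.exp() * torch.randn(
            B, *self.mu.shape[1:], device=x.device
        )
        for _ in range(self.T):  # T refinement iterations, Eq. (1)
            slots = self.step(slots, x)
        return slots  # \hat{s} = s^(T)
```

BO-QSA's modification keeps `step` untouched and only changes the initialization (learned queries instead of `mu`/`log_sigma`) and how gradients flow through the `T` iterations.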

