GENERATING UNSEEN COMPLEX SCENES: ARE WE THERE YET?

Abstract

Although recent complex scene conditional generation models produce increasingly appealing scenes, it is very hard to assess which models perform better and why. This is often due to models being trained on different data splits and defining their own experimental setups. In this paper, we propose a methodology to compare complex scene conditional generation models, and provide an in-depth analysis that assesses the ability of each model to (1) fit the training distribution and hence perform well on seen conditionings, (2) generalize to unseen conditionings composed of seen object combinations, and (3) generalize to unseen conditionings composed of unseen object combinations. As a result, we observe that recent methods are able to generate recognizable scenes given seen conditionings, and exploit compositionality to generalize to unseen conditionings with seen object combinations. However, all methods suffer from noticeable image quality degradation when asked to generate images from conditionings composed of unseen object combinations. Moreover, through our analysis, we identify the advantages of different pipeline components, and find that (1) encouraging compositionality through instance-wise spatial conditioning normalizations increases robustness to both types of unseen conditionings, (2) using semantically aware losses such as the scene-graph perceptual similarity helps improve some dimensions of the generation process, and (3) enhancing the quality of generated masks and of the individual objects are crucial steps towards improving robustness to both types of unseen conditionings.

1. INTRODUCTION

Recent years have witnessed significant advances in generative models (Goodfellow et al., 2014; Kingma & Welling, 2014; van den Oord et al., 2016a; Miyato & Koyama, 2018; Miyato et al., 2018; Brock et al., 2019), enabling their increasingly widespread use in many application domains (van den Oord et al., 2016b; Vondrick et al., 2016; Zhang et al., 2018a; Hong et al., 2018; Sun & Wu, 2020). Among the most promising approaches, Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) have achieved remarkable results, generating high quality, high resolution samples in the context of single-class conditional image generation (Brock et al., 2019). This outstanding progress has paved the road towards tackling more challenging tasks such as complex scene conditional generation, where the goal is to generate high quality images containing multiple objects and their interactions from a given conditioning (e.g. a bounding box layout, segmentation mask, or scene graph). Given the exploding number of possible object combinations and their layouts, the requested conditionings oftentimes require zero-shot generalization. Therefore, successfully generating high quality, high resolution, diverse samples from complex scene datasets such as COCO-Stuff (Caesar et al., 2018) remains a stretch goal. Despite recent efforts producing increasingly appealing complex scene samples (Hong et al., 2018; Hinz et al., 2019; Park et al., 2019; Ashual & Wolf, 2019; Sun & Wu, 2020; Sylvain et al., 2020), and as previously noted in the unconditional GAN literature (Lucic et al., 2018; Kurach et al., 2018), it is unfortunately very hard to assess which models perform better, and perhaps more importantly why.
In the case of conditional complex scene generation, this is often due to models being trained to fit different data splits, using different conditioning modalities and levels of supervision (bounding box layouts, segmentation masks, scene graphs), and reporting inconsistent quantitative metrics (e.g. repeatedly computing previous methods' results using different reference distributions, and/or

