INTERPRETING CLASS CONDITIONAL GANS WITH CHANNEL AWARENESS

Abstract

Understanding the mechanism of generative adversarial networks (GANs) helps us better use GANs for downstream applications. Existing efforts mainly target interpreting unconditional models, leaving it underexplored how a conditional GAN learns to render images of various categories. This work fills this gap by investigating how a class conditional generator unifies the synthesis of multiple classes. For this purpose, we dive into the widely used class-conditional batch normalization (CCBN) and observe that each feature channel is activated to varying degrees given different categorical embeddings. To describe this phenomenon, we propose channel awareness, which quantitatively characterizes how a single channel contributes to the final synthesis. Extensive evaluations and analyses of the BigGAN model pre-trained on ImageNet reveal that only a subset of channels is primarily responsible for the generation of a particular category, that similar categories (e.g., cat and dog) are usually related to some of the same channels, and that some channels turn out to share information across all classes. Moreover, our algorithm enables several novel applications with conditional GANs. Concretely, we achieve (1) versatile image editing by simply altering a single channel and (2) harmonious hybridization of two different classes. We further verify that the proposed channel awareness shows promising potential for (3) segmenting the synthesized image and (4) evaluating the category-wise synthesis performance. Code will be made publicly available.

1. INTRODUCTION

The past few years have witnessed the rapid advancement of generative adversarial networks (GANs) in image synthesis (Karras et al., 2021; Brock et al., 2019). Despite the wide range of applications powered by GANs, such as image-to-image translation (Isola et al., 2017), super-resolution (Chan et al., 2021; Menon et al., 2020), and image editing (Ling et al., 2021), each new task typically requires learning a separate model, which can be time- and resource-consuming. Some recent studies have confirmed that a well-trained GAN model naturally supports various downstream applications, benefiting from the rich knowledge learned during training (Bau et al., 2019; Shen et al., 2020). Therefore, to make sufficient use of a GAN, it becomes crucial to explore and further exploit its internal knowledge.

Many attempts have been made to understand the generation mechanism of GANs. It has been revealed that, to produce a plausible synthesis, the generator is required to render multi-level semantics, such as the overall attributes (e.g., the gender of a face image) (Shen et al., 2020), the objects inside (e.g., the bed in a bedroom image) (Bau et al., 2019; Yang et al., 2020), and the part-whole organization (e.g., the segmentation of the synthesis) (Zhang et al., 2021). However, existing efforts mainly focus on interpreting unconditional GANs, leaving conditional generation as a black box. Compared with unconditional models, a class conditional model is more informative and efficient in that it unifies the synthesis of multiple categories, such as animals, vehicles, and scenes (Brock et al., 2019). Figuring out how it manages class information holds great potential yet remains rarely explored.

To fill this gap, we take a close look at the popular class-conditional batch normalization (CCBN) (Brock et al., 2019), which is one of the core modules distinguishing conditional generators from unconditional ones.
Concretely, CCBN learns category-specific parameters to scale and shift the input features, such that the output features developed with different class embeddings can be
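The scale-and-shift mechanism described above can be sketched as follows. This is a minimal NumPy illustration, not the BigGAN implementation: the projection matrices `W_gamma` and `W_beta`, which map a class embedding to per-channel scale and shift, are hypothetical names introduced here for clarity.

```python
import numpy as np

def class_conditional_bn(x, class_emb, W_gamma, W_beta, eps=1e-5):
    """Sketch of class-conditional batch normalization (CCBN).

    x         : feature maps, shape (N, C, H, W)
    class_emb : per-sample class embeddings, shape (N, D)
    W_gamma, W_beta : assumed learned projections of shape (D, C) that
        map the class embedding to per-channel scale and shift.
    """
    # Standard batch normalization over the batch and spatial dimensions.
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)

    # Category-specific scale (gamma) and shift (beta), one value per
    # channel, predicted from the class embedding.
    gamma = class_emb @ W_gamma  # (N, C)
    beta = class_emb @ W_beta    # (N, C)
    return gamma[:, :, None, None] * x_hat + beta[:, :, None, None]
```

Because `gamma` and `beta` depend on the class embedding, the same normalized feature channel is modulated differently for each category, which is precisely the per-channel, per-class behavior that channel awareness sets out to quantify.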

