DIFFERENTIABLE GAUSSIANIZATION LAYERS FOR INVERSE PROBLEMS REGULARIZED BY DEEP GENERATIVE MODELS

Abstract

Deep generative models such as GANs, normalizing flows, and diffusion models are powerful regularizers for inverse problems. They exhibit great potential for helping reduce ill-posedness and attain high-quality results. However, the latent tensors of such deep generative models can fall out of the desired high-dimensional standard Gaussian distribution during inversion, particularly in the presence of data noise and inaccurate forward models, leading to low-fidelity solutions. To address this issue, we propose to reparameterize and Gaussianize the latent tensors using novel differentiable data-dependent layers wherein custom operators are defined by solving optimization problems. These proposed layers constrain inverse problems to obtain high-fidelity in-distribution solutions. We validate our technique on three inversion tasks: compressive-sensing MRI, image deblurring, and eikonal tomography (a nonlinear PDE-constrained inverse problem) using two representative deep generative models: StyleGAN2 and Glow. Our approach achieves state-of-the-art performance in terms of accuracy and consistency.

1. INTRODUCTION

Inverse problems play a crucial role in many scientific fields and everyday applications. For example, astrophysicists use radio electromagnetic data to image galaxies and black holes (Högbom, 1974; Akiyama et al., 2019). Geoscientists rely on seismic recordings to reveal the internal structures of Earth (Tarantola, 1984; Tromp et al., 2005; Virieux & Operto, 2009). Biomedical engineers and doctors use X-ray projections, ultrasound measurements, and magnetic resonance data to reconstruct images of human tissues and organs (Lauterbur, 1973; Gemmeke & Ruiter, 2007; Lustig et al., 2007). Therefore, developing effective solutions for inverse problems is of great importance in advancing scientific endeavors and improving our daily lives. Solving an inverse problem starts with the definition of a forward mapping from parameters m to data d, which we formally write as d = f(m) + ε, where f stands for a forward model that usually describes some physical process, ε denotes noise, d the observed data, and m the parameters to be estimated. The forward model can be either linear or nonlinear, and either explicit or implicitly defined by solving partial differential equations (PDEs). This study considers three representative inverse problems: compressive-sensing MRI, image deblurring, and eikonal (traveltime) tomography, which have important applications in medical science, geoscience, and astronomy. The details of each problem and its forward model are given in App. A. The forward problem maps m to d, while the inverse problem estimates m given d. Unfortunately, inverse problems are generally under-determined, with infinitely many compatible solutions, and intrinsically ill-posed because of the nature of the physical system. Worse still, the observed data are usually noisy, and the assumed forward model might be inaccurate, exacerbating the ill-posedness.
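The generic forward mapping d = f(m) + ε can be illustrated with a minimal sketch. Here we use a hypothetical linear operator A as a stand-in for f (the actual forward models, e.g. subsampled Fourier measurements for MRI, are detailed in App. A); all names and dimensions below are illustrative assumptions, not part of the paper's setup.

```python
import numpy as np

# Sketch of the forward mapping d = f(m) + eps with a linear f(m) = A m.
# A, m_true, and the dimensions are hypothetical stand-ins.
rng = np.random.default_rng(0)

n_params, n_data = 64, 32
A = rng.standard_normal((n_data, n_params)) / np.sqrt(n_params)  # measurement operator
m_true = rng.standard_normal(n_params)                           # unknown parameters m

noise = 0.05 * rng.standard_normal(n_data)                       # data noise eps
d = A @ m_true + noise                                           # observed data d

# The inverse problem is to estimate m from d. With n_data < n_params,
# the system is under-determined: infinitely many m fit d exactly.
print(d.shape)  # → (32,)
```

Because n_data < n_params here, recovering m from d alone is hopeless without prior information, which motivates the regularized formulation below.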
These challenges require regularization to inject a priori knowledge into the inversion process and obtain plausible, high-fidelity results. Therefore, an inverse problem is usually posed as an optimization problem:

arg min_m (1/2) ||d - f(m)||_2^2 + R(m),    (2)

where R(m) is a regularization term.
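The optimization structure of Eq. (2) can be sketched with gradient descent. As a simplifying assumption we take a linear forward model f(m) = A m and a quadratic Tikhonov regularizer R(m) = (λ/2)||m||², in place of the deep generative prior this paper develops; the names and values below are purely illustrative.

```python
import numpy as np

# Minimal sketch of minimizing (1/2)||d - f(m)||^2 + R(m) by gradient
# descent, assuming f(m) = A m and R(m) = (lam/2)||m||^2 (a quadratic
# stand-in for the deep generative prior used in the paper).
rng = np.random.default_rng(1)
n_params, n_data = 64, 32
A = rng.standard_normal((n_data, n_params)) / np.sqrt(n_params)
m_true = rng.standard_normal(n_params)
d = A @ m_true + 0.05 * rng.standard_normal(n_data)

lam, step = 1e-2, 0.5
m = np.zeros(n_params)
for _ in range(500):
    # gradient of (1/2)||d - A m||^2 + (lam/2)||m||^2
    grad = A.T @ (A @ m - d) + lam * m
    m = m - step * grad

misfit = 0.5 * np.linalg.norm(d - A @ m) ** 2
```

The regularizer selects one solution among the infinitely many that fit the data; swapping R(m) for a deep generative prior is what makes the latent-space behavior studied in this paper relevant.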

