LOOP UNROLLED SHALLOW EQUILIBRIUM REGULARIZER (LUSER) - A MEMORY-EFFICIENT INVERSE PROBLEM SOLVER

Abstract

In inverse problems we aim to reconstruct some underlying signal of interest from potentially corrupted and often ill-posed measurements. Classical optimization-based techniques proceed by optimizing a data consistency metric together with a regularizer. Current state-of-the-art machine learning approaches draw inspiration from such techniques by unrolling the iterative updates of an optimization-based solver and then learning a regularizer from data. This loop unrolling (LU) method has shown tremendous success, but often requires a deep model for the best performance, leading to high memory costs during training. Thus, to address the balance between computation cost and network expressiveness, we propose an LU algorithm with shallow equilibrium regularizers (LUSER). These implicit models are as expressive as deeper convolutional networks, but far more memory efficient during training. The proposed method is evaluated on image deblurring, computed tomography (CT), and single-coil magnetic resonance imaging (MRI) tasks and shows similar, or even better, performance while requiring up to 8× less computational resources during training when compared against a more typical LU architecture with feedforward convolutional regularizers.

1. INTRODUCTION

In an inverse problem we face the task of reconstructing some data or parameters of an unknown signal from indirect observations. The forward process, or the mapping from the data to the observations, is typically well known, but ill-posed or non-invertible. More formally, we consider the task of recovering some underlying signal x from measurements y taken via some forward operator A according to

    y = Ax + η,    (1)

where η represents noise. The forward operator can be nonlinear, but to simplify the notation, we illustrate the idea in linear form throughout this paper. A common approach to recover the signal is via an iterative method based on the least-squares loss:

    x̂ = arg min_x ∥y − Ax∥².    (2)

For many problems of interest, A is ill-posed and does not have full column rank. Thus, attempting to solve (2) does not yield a unique solution. To address this, we can extend (2) by including a regularizing term to bias the inversion towards solutions with favorable properties. Common examples of regularization include ℓ2, ℓ1, and Total Variation (TV). Each regularizer encourages certain properties in the estimated signal x (e.g., smoothness, sparsity, piecewise constancy, etc.) and is often chosen based on task-specific prior knowledge.

Recent works (Ongie et al., 2020) attempt to tackle inverse problems using more data-driven methods. Unlike typical supervised learning tasks that attempt to learn a mapping purely from examples, deep learning for inverse problems has access to the forward operator, which should guide the learning process toward more accurate reconstructions. One popular approach to incorporating knowledge of the forward operator is termed loop unrolling (LU). These methods are heavily inspired by standard iterative inverse problem solvers, but rather than use a hand-tuned regularizer, they instead learn the update with some parameterized model. They tend to have a fixed number
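To make the classical baseline concrete, the sketch below solves an ℓ2-regularized version of (2), i.e. arg min_x ∥y − Ax∥² + λ∥x∥², by gradient descent. The random forward operator, step size, and regularization weight λ are illustrative assumptions, not choices from the paper; the same loop structure is what LU methods unroll, with the regularizer gradient replaced by a learned network.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 30, 50                                  # fewer measurements than unknowns: no full column rank
A = rng.standard_normal((m, n)) / np.sqrt(n)   # hypothetical ill-posed forward operator (assumption)
x_true = np.zeros(n)
x_true[[5, 20, 40]] = [1.0, -2.0, 1.5]         # sparse ground-truth signal
y = A @ x_true + 0.01 * rng.standard_normal(m) # measurements y = Ax + noise, as in (1)

lam, step = 0.1, 0.3                           # regularization weight and gradient step (assumptions)
x = np.zeros(n)
for _ in range(500):
    # gradient of the data-consistency term plus gradient of the l2 regularizer
    grad = A.T @ (A @ x - y) + lam * x
    x = x - step * grad
```

For this quadratic objective the iterates converge to the closed-form Tikhonov solution (AᵀA + λI)⁻¹Aᵀy, which makes the loop easy to sanity-check; a learned regularizer has no such closed form, which is why unrolled iterations are trained end to end instead.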

