QUANTIZED COMPRESSED SENSING WITH SCORE-BASED GENERATIVE MODELS

Abstract

We consider the general problem of recovering a high-dimensional signal from noisy quantized measurements. Quantization, especially coarse quantization such as 1-bit sign measurements, leads to severe information loss, and thus good prior knowledge of the unknown signal is crucial for accurate recovery. Motivated by the power of score-based generative models (SGM, also known as diffusion models) in capturing the rich structure of natural signals beyond simple sparsity, we propose an unsupervised data-driven approach called quantized compressed sensing with SGM (QCS-SGM), where the prior distribution is modeled by a pre-trained SGM. To perform posterior sampling, an annealed pseudo-likelihood score, called the noise-perturbed pseudo-likelihood score, is introduced and combined with the prior score of the SGM. The proposed QCS-SGM applies to an arbitrary number of quantization bits. Experiments on a variety of benchmark datasets demonstrate that QCS-SGM outperforms existing state-of-the-art algorithms by a large margin for both in-distribution and out-of-distribution samples. Moreover, as a posterior sampling method, QCS-SGM can easily be used to obtain confidence intervals or uncertainty estimates of the reconstructed results.
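The posterior sampling strategy described above can be summarized, at the score level, by Bayes' rule: the (generally intractable) posterior score decomposes into the prior score, supplied by the pre-trained SGM, plus a likelihood score, which QCS-SGM approximates with the noise-perturbed pseudo-likelihood score. A schematic statement of this decomposition (notation ours, not taken verbatim from the paper) is

```latex
\nabla_{\mathbf{x}_t} \log p_t(\mathbf{x}_t \mid \mathbf{y})
  = \underbrace{\nabla_{\mathbf{x}_t} \log p_t(\mathbf{x}_t)}_{\text{prior score (pre-trained SGM)}}
  + \underbrace{\nabla_{\mathbf{x}_t} \log p_t(\mathbf{y} \mid \mathbf{x}_t)}_{\text{(pseudo-)likelihood score}},
```

where $\mathbf{x}_t$ denotes the noise-perturbed signal at diffusion time $t$, and the second term is intractable for quantized measurements and is replaced by the annealed pseudo-likelihood score.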

1. INTRODUCTION

Many problems in science and engineering, such as signal processing, computer vision, machine learning, and statistics, can be cast as linear inverse problems of the form y = Ax + n (1), where A ∈ R^{M×N} is a known linear mixing matrix, n ∼ N(n; 0, σ²I) is i.i.d. additive Gaussian noise, and the goal is to recover the unknown signal x ∈ R^N from the noisy linear measurements y ∈ R^M. Among various applications, compressed sensing (CS) provides a highly efficient paradigm that makes it possible to recover a high-dimensional signal from a far smaller number M ≪ N of measurements (Candès & Wakin, 2008). The underlying wisdom of CS is to leverage the intrinsic structure of the unknown signal x to aid the recovery. One of the most widely used structures is sparsity, i.e., most elements of x are zero in certain transform domains, e.g., the wavelet and Fourier domains (Candès & Wakin, 2008). In other words, standard CS exploits the fact that many natural signals in real-world applications are (approximately) sparse. This direction has spurred a hugely active field of research over the last two decades, including efficient algorithm design (Tibshirani, 1996; Beck & Teboulle, 2009; Tropp & Wright, 2010; Kabashima, 2003; Donoho et al., 2009), theoretical analysis (Candès et al., 2006; Donoho, 2006; Kabashima et al., 2009; Bach et al., 2012), as well as all kinds of applications (Lustig et al., 2007; 2008), to name a few. Despite their remarkable success, traditional CS methods are still limited in the achievable rates, since sparsity assumptions, whether naive sparsity or block sparsity (Duarte & Eldar, 2011), are too simple to capture the complex and rich structure of natural signals.
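To make the setting concrete, the following is a minimal numpy sketch of the standard CS pipeline in (1): an i.i.d. Gaussian sensing matrix, a k-sparse signal, noisy linear measurements, and sparse recovery via ISTA for the Lasso. This is a generic illustration, not the paper's method; all dimensions and the regularization weight are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, k = 200, 80, 10                            # signal dim, measurements (M << N), sparsity
A = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))  # i.i.d. Gaussian sensing matrix
x = np.zeros(N)
x[rng.choice(N, k, replace=False)] = rng.normal(size=k)  # k-sparse ground truth
y = A @ x + 1e-3 * rng.normal(size=M)            # noisy linear measurements y = Ax + n

# ISTA for the Lasso: min_z 0.5*||y - Az||^2 + lam*||z||_1
lam = 0.01
L = np.linalg.norm(A, 2) ** 2                    # Lipschitz constant of the smooth part
z = np.zeros(N)
for _ in range(500):
    z = z - (A.T @ (A @ z - y)) / L              # gradient step on the data-fit term
    z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-thresholding step

rel_err = np.linalg.norm(z - x) / np.linalg.norm(x)
```

With k ≪ M ≪ N, the sparse signal is recovered accurately from far fewer measurements than unknowns, which is the basic CS phenomenon the paper builds on.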
Nevertheless, the linear model (1) implicitly assumes that the measurements have infinite precision, which is not the case in realistic acquisition scenarios. In practice, the obtained measurements have to be quantized to a finite number of Q bits before transmission and/or storage (Zymnis et al., 2009; Dai & Milenkovic, 2011). Quantization leads to information loss, which makes the recovery particularly challenging. For moderate and high quantization resolutions, i.e., when Q is large, the quantization effect is usually modeled as mere additive Gaussian noise whose variance is determined by the quantization distortion (Dai & Milenkovic, 2011; Jacques et al., 2010). Most CS algorithms originally designed for the linear model (1) can then be applied with some modifications. However, such an approach is clearly suboptimal, since the information about the quantizer is not utilized to its full extent (Dai & Milenkovic, 2011). This is especially true in the case of coarse quantization, i.e., when Q is small. An extreme and important case of coarse quantization is 1-bit quantization, where Q = 1 and only the signs of the measurements are observed (Boufounos & Baraniuk, 2008). Apart from its extremely low storage cost, 1-bit quantization is particularly appealing in hardware implementations and has also proven robust to both nonlinear distortions and dynamic range issues (Boufounos & Baraniuk, 2008). Consequently, there have been extensive studies on quantized CS, particularly 1-bit CS, in the past decades, and a variety of algorithms have been proposed, e.g., (Zymnis et al., 2009; Dai & Milenkovic, 2011; Plan & Vershynin, 2012; 2013; Jacques et al., 2013; Xu & Kabashima, 2013; Xu et al., 2014; Awasthi et al., 2016; Meng et al., 2018; Jung et al., 2021; Liu et al., 2020; Liu & Liu, 2022). However, most existing methods are built on standard CS techniques and therefore inevitably inherit their inability to capture rich structures of natural signals beyond sparsity.
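The quantized measurement model y = Q(Ax + n) can be sketched as follows for both the 1-bit (sign) case and a general Q-bit uniform quantizer. This is an illustrative sketch only: the clipping range and bin-center dequantization below are our own assumptions, not the specific quantizer used in the paper.

```python
import numpy as np

def uniform_quantize(u, Q, lo=-3.0, hi=3.0):
    """Map real-valued measurements to the centers of 2**Q uniform bins on [lo, hi]."""
    levels = 2 ** Q
    step = (hi - lo) / levels
    idx = np.clip(np.floor((u - lo) / step), 0, levels - 1).astype(int)  # bin index
    return lo + (idx + 0.5) * step   # bin-center representative value

rng = np.random.default_rng(0)
u = rng.normal(size=1000)            # stand-in for the pre-quantization values Ax + n

y_1bit = np.sign(u)                  # Q = 1: only the sign survives quantization
y_3bit = uniform_quantize(u, Q=3)    # Q = 3: each measurement kept at 8 levels
```

For Q = 1, each measurement carries a single bit of information, which is why strong signal priors (such as an SGM) become essential for accurate recovery.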
While several recent works (Liu et al., 2020; Liu & Liu, 2022) have studied 1-bit CS with generative priors, their main focus is on VAEs and/or GANs rather than SGMs.



Figure 1: Reconstructed images of our QCS-SGM for one FFHQ 256 × 256 high-resolution RGB test image (N = 256 × 256 × 3 = 196608 pixels) from noisy, heavily quantized (1-bit, 2-bit, and 3-bit) CS measurements y = Q(Ax + n) at 8× compression, i.e., M = 24576 ≪ N. The measurement matrix A ∈ R^{M×N} is i.i.d. Gaussian, i.e., A_ij ∼ N(0, 1/M), and Gaussian noise n is added with standard deviation σ = 10⁻³.

For example, most natural signals are not strictly sparse even in the specified transform domains, and thus relying on sparsity alone for reconstruction can lead to inaccurate results. Indeed, researchers have proposed to combine sparsity with additional structural assumptions, such as low-rankness (Fazel et al., 2008; Foygel & Mackey, 2014) and total variation (Candès et al., 2006; Tang et al., 2009), to further improve reconstruction performance. Nevertheless, these hand-crafted priors apply, at best approximately, to a very limited range of signals and are difficult to generalize to other cases. To address this problem, driven by the success of generative models (Goodfellow et al., 2014; Kingma & Welling, 2013; Rezende & Mohamed, 2015), there has been a surge of interest in developing CS methods with data-driven priors learned by deep generative models. In particular, SGM-based approaches (Song et al., 2021a; b; Kawar et al., 2021; 2022; Chung et al., 2022) perform quite well in recovering x with only a few linear measurements in (1).

Code availability: The code is available at https://github.com/mengxiangming/QCS

