TABDDPM: MODELLING TABULAR DATA WITH DIFFUSION MODELS

Abstract

Denoising diffusion probabilistic models are currently becoming the leading paradigm of generative modeling for many important data modalities. Being the most prevalent in the computer vision community, diffusion models have also recently gained some attention in other domains, including speech, NLP, and graph-like data. In this work, we investigate if the framework of diffusion models can be advantageous for general tabular problems, where datapoints are typically represented by vectors of heterogeneous features. The inherent heterogeneity of tabular data makes accurate modeling quite challenging, since the individual features can be of completely different nature, i.e., some of them can be continuous and some of them can be discrete. To address such data types, we introduce TabDDPM, a diffusion model that can be universally applied to any tabular dataset and handles any type of feature. We extensively evaluate TabDDPM on a wide set of benchmarks and demonstrate its superiority over existing GAN/VAE alternatives, which is consistent with the advantage of diffusion models in other fields. Additionally, we show that TabDDPM is eligible for privacy-oriented setups, where the original datapoints cannot be publicly shared.

1. INTRODUCTION

Denoising diffusion probabilistic models (DDPM) (Sohl-Dickstein et al., 2015; Ho et al., 2020) have recently become an object of great research interest in the generative modelling community since they often outperform the alternative approaches both in terms of the realism of individual samples and their diversity (Dhariwal & Nichol, 2021). The most impressive successes of DDPM were demonstrated in the domain of natural images (Dhariwal & Nichol, 2021; Saharia et al., 2022; Rombach et al., 2022), where the advantages of diffusion models are successfully exploited in applications such as colorization (Song et al., 2021), inpainting (Song et al., 2021), segmentation (Baranchuk et al., 2021), super-resolution (Saharia et al., 2021; Li et al., 2021), semantic editing (Meng et al., 2021), and others. Beyond computer vision, the DDPM framework is also investigated in other fields, such as NLP (Austin et al., 2021; Li et al., 2022), waveform signal processing (Kong et al., 2020; Chen et al., 2020), molecular graphs (Jing et al., 2022; Hoogeboom et al., 2022), and time series (Tashiro et al., 2021), testifying to the universality of diffusion models across a wide range of problems.

The aim of our work is to understand if the universality of DDPM can be extended to the case of general tabular problems, which are ubiquitous in various industrial applications that include data described by a set of heterogeneous features. For many such applications, the demand for high-quality generative models is especially acute because of the modern privacy regulations, like GDPR, which prevent publishing real user data, while the synthetic data produced by generative models can be shared. Training a high-quality model of tabular data, however, can be more challenging compared to computer vision or NLP due to the heterogeneity of individual features and relatively small sizes of typical tabular datasets.
In our paper, we show that despite these two intricacies, diffusion models can successfully approximate typical distributions of tabular data, leading to state-of-the-art performance on most of the benchmarks. In more detail, the main contributions of our work are the following:

1. We introduce TabDDPM, the simplest design of DDPM for tabular problems that can be applied to any tabular task and can work with mixed data, which includes both numerical and categorical features.
2. We demonstrate that TabDDPM outperforms the alternative approaches designed for tabular data, including GAN-based and VAE-based models from the literature, and illustrate the sources of this advantage for several datasets.
3. We show that data produced by TabDDPM appears to be a "sweet spot" for privacy-concerned scenarios when synthetics are used to substitute the real user data that cannot be shared.

The source code of TabDDPM is publicly available. Generative models for tabular problems are currently an active research direction in the machine learning community (Xu et al., 2019; Engelmann & Lessmann, 2021; Jordon et al., 2018; Fan et al., 2020; Torfi et al., 2022; Zhao et al., 2021; Kim et al., 2021; Zhang et al., 2021; Nock & Guillame-Bert, 2022; Wen et al., 2022) since high-quality synthetic data is in great demand for many tabular tasks. First, tabular datasets are often limited in size, unlike in vision or NLP problems, for which huge "extra" data is available on the Internet. Second, proper synthetic datasets do not contain actual user data, therefore they are not subject to GDPR-like regulations and can be publicly shared without violation of anonymity.
The recent works have developed a large number of models, including tabular VAEs (Xu et al., 2019) and GAN-based approaches (Xu et al., 2019; Engelmann & Lessmann, 2021; Jordon et al., 2018; Fan et al., 2020; Torfi et al., 2022; Zhao et al., 2021; Kim et al., 2021; Zhang et al., 2021; Nock & Guillame-Bert, 2022; Wen et al., 2022). By extensive evaluations on a large number of public benchmarks, we show that our TabDDPM model surpasses the existing alternatives, often by a large margin.

2. RELATED WORK

"Shallow" synthetics generation. Unlike unstructured images or natural texts, tabular data is typically structured, i.e., the individual features are often interpretable and it is not clear if their modelling requires several layers of "deep" architectures. Therefore, the simple interpolation techniques, like SMOTE (Chawla et al., 2002) (originally proposed to address class-imbalance) can serve as simple and powerful solutions as demonstrated in (Camino et al., 2020) , where SMOTE is shown to outperform tabular GANs for minor class oversampling. In the experiments, we demonstrate the advantage of synthetics produced by TabDDPM over synthetics produced by interpolation techniques from the privacy-preserving perspective.

3. BACKGROUND

Diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020) are likelihood-based generative models that handle the data through forward and reverse Markov processes. The forward process $q(x_{1:T} \mid x_0) = \prod_{t=1}^{T} q(x_t \mid x_{t-1})$ gradually adds noise to an initial sample $x_0$ from the data distribution $q(x_0)$, sampling noise from the predefined distributions $q(x_t \mid x_{t-1})$ with variances $\{\beta_1, \ldots, \beta_T\}$.
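For Gaussian forward kernels, the composed process admits the well-known closed form $q(x_t \mid x_0) = \mathcal{N}(\sqrt{\bar{\alpha}_t}\, x_0,\ (1 - \bar{\alpha}_t) I)$ with $\alpha_t = 1 - \beta_t$ and $\bar{\alpha}_t = \prod_{s \le t} \alpha_s$ (Ho et al., 2020). A minimal NumPy sketch of sampling $x_t$ directly from $x_0$ (function name and schedule are our illustrative choices):

```python
import numpy as np

def forward_diffuse(x0, betas, t, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps
```

With a typical linear schedule, $\bar{\alpha}_T$ is close to zero at the final step, so $x_T$ is approximately a standard Gaussian regardless of $x_0$, which is what makes the reverse process a valid generative model.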

Diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020) are a paradigm of generative modelling that aims to approximate the target distribution by the endpoint of the Markov chain, which starts from a given parametric distribution, typically a standard Gaussian. Each Markov step is performed by a deep neural network that effectively learns to invert the diffusion process with a known Gaussian kernel. Ho et al. demonstrated the equivalence of diffusion models and score matching (Song & Ermon, 2019; 2020), showing them to be two different perspectives on the gradual conversion of a simple known distribution into a target distribution via the iterative denoising process.
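In the standard DDPM parameterisation, the network predicts the noise $\epsilon_\theta(x_t, t)$, and one reverse step samples $x_{t-1}$ from a Gaussian whose mean is derived from that prediction. The following NumPy sketch of a single reverse step follows Ho et al. (2020) with the $\sigma_t^2 = \beta_t$ variance choice; the function name is ours, and `eps_pred` stands in for the output of the trained denoising network:

```python
import numpy as np

def ddpm_reverse_step(x_t, t, eps_pred, betas, rng):
    """One reverse (denoising) step p_theta(x_{t-1} | x_t):
    mean = (x_t - beta_t / sqrt(1 - alpha_bar_t) * eps_pred) / sqrt(alpha_t),
    then add sqrt(beta_t) * z, except at t = 0 where no noise is added."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)
    mean = (x_t - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps_pred) / np.sqrt(alphas[t])
    if t == 0:
        return mean  # final step is deterministic
    noise = rng.standard_normal(x_t.shape)
    return mean + np.sqrt(betas[t]) * noise
```

Iterating this step from $t = T - 1$ down to $0$, starting at $x_T \sim \mathcal{N}(0, I)$, implements the Markov chain described above whose endpoint approximates the data distribution.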

