CURE: A PRE-TRAINING FRAMEWORK ON LARGE-SCALE PATIENT DATA FOR TREATMENT EFFECT ESTIMATION

Anonymous authors

Abstract

Treatment effect estimation (TEE) refers to the estimation of causal effects, and it aims to compare the effects of different treatment strategies on important outcomes. Current machine learning based methods are mainly trained on labeled data with specific treatments or outcomes of interest, which can be sub-optimal if the labeled data are limited. In this paper, we propose a novel transformer-based pre-training and fine-tuning framework called CURE for TEE from observational data. CURE is pre-trained on large-scale unlabeled patient data to learn representative contextual patient representations, and then fine-tuned on labeled patient data for TEE. We design a new sequence encoding for longitudinal (or structured) patient data, and we incorporate structure and time into the patient embeddings. Evaluated on 4 downstream TEE tasks, CURE outperforms the state-of-the-art methods, achieving average absolute improvements of 3.8% in Area under the ROC Curve (AUC), 6.9% in Area under the Precision-Recall Curve (AUPR), and 15.7% in Influence function-based Precision of Estimating Heterogeneous Effects (IF-PEHE). We further demonstrate the data scalability of CURE and verify the results against corresponding randomized clinical trials. Our proposed method provides a new machine learning paradigm for TEE based on observational data.

1. INTRODUCTION

Treatment effect estimation (TEE) aims to evaluate the causal effects of treatment strategies on important outcomes, which is a crucial problem in many areas such as healthcare (Glass et al., 2013), education (Dehejia & Wahba, 1999) and economics (Imbens, 2004). Randomized clinical trials (RCTs) are the de-facto gold standard for identifying causal effects by randomizing the treatment assignment and comparing the responses in different treatment groups. However, conducting RCTs is time-consuming, expensive and sometimes unethical. Observational data such as medical claims provide a promising alternative for treatment effect estimation when RCTs are expensive or impossible to conduct.

Recently, many works have adopted neural networks (NNs) for TEE from observational data (Shalit et al., 2017; Shi et al., 2019; Hassanpour & Greiner, 2019; Curth & van der Schaar, 2021b;a; Zhang et al., 2022b; Guo et al., 2021). Compared to classical TEE methods such as regression trees (Chipman et al., 2010) or random forests (Wager & Athey, 2018), NN-based methods better capture the complex and nonlinear relationships among covariates, treatment and outcome. However, existing TEE methods still share some common limitations: 1) most model designs are task-specific or data-specific, making them hard to adapt to more general settings; 2) existing labeled datasets are often small, whereas training neural models requires large amounts of high-quality labeled data to capture the inherent complex relationships in the input.

Recently, the Transformer (Vaswani et al., 2017) has been widely adopted as a critical and unified building block in the pre-training and fine-tuning paradigm across data modalities. Pre-trained Transformer-based models (PTMs) have become the model of choice in many deep learning domains such as natural language processing (NLP) (Devlin et al., 2018; Radford et al., 2018; 2019).
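To make the TEE setup concrete, the sketch below illustrates the potential-outcomes estimation problem with a simple T-learner on synthetic observational data: separate outcome models are fit on the treated and control groups, and the conditional treatment effect is estimated as the difference of their predictions. This is a toy baseline for intuition only, not the CURE method; all variable names and the data-generating process are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic observational data: covariates X, binary treatment t, outcome y.
# Treatment assignment depends on X[:, 0], so the data are confounded.
n, d = 2000, 5
X = rng.normal(size=(n, d))
propensity = 1.0 / (1.0 + np.exp(-X[:, 0]))
t = rng.binomial(1, propensity)
tau = 2.0 + X[:, 1]                       # true heterogeneous treatment effect
y = X @ np.ones(d) + tau * t + rng.normal(scale=0.1, size=n)

def fit_linear(X, y):
    """Least-squares regression with an intercept column."""
    Xb = np.c_[X, np.ones(len(X))]
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

# T-learner: one outcome regression per treatment group.
w1 = fit_linear(X[t == 1], y[t == 1])
w0 = fit_linear(X[t == 0], y[t == 0])

# Estimated individual effects: difference of the two predicted outcomes.
Xb = np.c_[X, np.ones(n)]
cate_hat = Xb @ w1 - Xb @ w0

print(round(cate_hat.mean(), 2))          # close to the true average effect of 2.0
```

Because both outcome models condition on the confounders in X, the estimator recovers the heterogeneous effect here; NN-based methods such as those cited above replace the linear regressions with shared and treatment-specific network heads to handle nonlinear relationships.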

