CURE: A PRE-TRAINING FRAMEWORK ON LARGE-SCALE PATIENT DATA FOR TREATMENT EFFECT ESTIMATION

Anonymous

Abstract

Treatment effect estimation (TEE) aims to estimate causal effects by comparing the effects of different treatment strategies on important outcomes. Current machine learning based methods are mainly trained on labeled data for specific treatments or outcomes of interest, which can be sub-optimal when labeled data are limited. In this paper, we propose CURE, a novel transformer-based pre-training and fine-tuning framework for TEE from observational data. CURE is pre-trained on large-scale unlabeled patient data to learn representative contextual patient representations, and then fine-tuned on labeled patient data for TEE. We design a new sequence encoding for longitudinal (structured) patient data and incorporate structure and time into the patient embeddings. Evaluated on 4 downstream TEE tasks, CURE outperforms state-of-the-art methods with an average absolute improvement of 3.8% in Area under the ROC Curve (AUC) and 6.9% in Area under the Precision-Recall Curve (AUPR), and a 15.7% absolute improvement in Influence function-based Precision of Estimating Heterogeneous Effects (IF-PEHE). We further demonstrate the data scalability of CURE and verify the results against corresponding randomized clinical trials. Our proposed method provides a new machine learning paradigm for TEE from observational data.

1. INTRODUCTION

Treatment effect estimation (TEE) evaluates the causal effects of treatment strategies on important outcomes, a crucial problem in many areas such as healthcare (Glass et al., 2013), education (Dehejia & Wahba, 1999), and economics (Imbens, 2004). Randomized clinical trials (RCTs) are the de-facto gold standard for identifying causal effects: they randomize the treatment assignment and compare the responses in different treatment groups. However, conducting RCTs is time-consuming, expensive, and sometimes unethical. Observational data such as medical claims provide a promising alternative for treatment effect estimation when RCTs are expensive or impossible to conduct. Recently, many works have adopted neural networks (NNs) for TEE from observational data (Shalit et al., 2017; Shi et al., 2019; Hassanpour & Greiner, 2019; Curth & van der Schaar, 2021b;a; Zhang et al., 2022b; Guo et al., 2021). Compared to classical TEE methods such as regression trees (Chipman et al., 2010) or random forests (Wager & Athey, 2018), NN-based methods better handle the complex, nonlinear relationships among covariates, treatment, and outcome. However, existing TEE methods still share some common limitations: 1) most model designs are task-specific or data-specific, so it is hard to adapt them to more general settings; 2) existing labeled datasets are often small, whereas training neural models requires large amounts of high-quality labeled data to capture the inherent complex relationships in the input data. Recently, the Transformer (Vaswani et al., 2017) has been widely adopted as a critical and unified building block in the pre-training and fine-tuning paradigm across data modalities.
Pre-trained Transformer-based models (PTMs) have become the model of choice in many deep learning domains such as natural language processing (NLP) (Devlin et al., 2018; Radford et al., 2018; 2019; Brown et al., 2020; Liu et al., 2019) and computer vision (CV) (Carion et al., 2020; Dosovitskiy et al., 2020; Parmar et al., 2018). The dominant approach is to pre-train on a large-scale dataset with unsupervised or self-supervised learning and then fine-tune on a smaller task-specific dataset. Nonetheless, applying this pre-training and fine-tuning paradigm to treatment effect estimation faces three major challenges: 1) encoding structured longitudinal observational patient data into sequence input; 2) lack of a well-curated large-scale pre-training dataset; 3) lack of real-world downstream treatment effect estimation tasks for benchmarking. In this paper, we propose a new pre-training and fine-tuning framework for estimating the causal effect of a treatment: CaUsal tReatment Effect estimation (CURE). As shown in Fig. 1, the large-scale structured patient data are extracted from real-world medical claims data (MarketScan Research Databases¹). We first encode the structured data as sequential input by chronologically flattening and aligning all observed covariates, obtaining around 3M processed unlabeled patient sequences for pre-training. The downstream datasets with labeled treatments and outcomes are created from established RCTs according to specific TEE tasks. Based on the retrospective study design and domain knowledge, we obtain 4 downstream tasks, each containing 10K-20K patient samples. The task is to evaluate the comparative effectiveness of two treatments in reducing the risk of stroke for patients with coronary artery disease (CAD). Second, we pre-train a Transformer-based model on the unlabeled data with an unsupervised learning objective to generate contextualized patient representations.
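The flattening step described above can be sketched as follows. This is a minimal illustration, not the paper's actual preprocessing: the record layout (visits with a `date` plus lists of diagnosis and medication codes) and the function name are our assumptions.

```python
from datetime import date

def flatten_patient(visits):
    """Flatten a structured patient record into one chronological token
    sequence, keeping each covariate's type and its visit-time offset.

    `visits` is an assumed layout: a list of dicts like
    {"date": date(...), "diagnoses": [...], "medications": [...]}.
    """
    visits = sorted(visits, key=lambda v: v["date"])
    t0 = visits[0]["date"]
    tokens, types, times = [], [], []
    for v in visits:
        days = (v["date"] - t0).days  # offset later used for time embeddings
        for code in v.get("diagnoses", []):
            tokens.append(code); types.append("dx"); times.append(days)
        for code in v.get("medications", []):
            tokens.append(code); types.append("rx"); times.append(days)
    return tokens, types, times

# Example: two visits are flattened into one aligned sequence.
record = [
    {"date": date(2020, 1, 1), "diagnoses": ["I25.1"], "medications": ["aspirin"]},
    {"date": date(2020, 2, 1), "diagnoses": ["I63.9"]},
]
tokens, types, times = flatten_patient(record)
# tokens == ["I25.1", "aspirin", "I63.9"], times == [0, 0, 31]
```

The parallel `types` and `times` lists preserve the hierarchical and temporal information that the flattening would otherwise discard; they feed the embedding method described next.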
To accommodate the complex hierarchical structure (i.e., a patient record contains multiple visits, and each visit contains multiple types of medications or diagnoses) and the irregular timing of observational patient data, we propose a comprehensive embedding method that incorporates structure and time information. Finally, we fine-tune the pre-trained model on various downstream TEE tasks.
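One common way to realize such an embedding, sketched here as an assumption rather than the paper's exact design, is to sum three lookups per flattened token: a covariate embedding, a type embedding for its structural role, and a time embedding for its visit offset. The table sizes and dimension below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 128

# Lookup tables (illustrative sizes); in a real model these are learned.
token_table = rng.normal(size=(10_000, dim))  # one row per medical code
type_table = rng.normal(size=(2, dim))        # e.g. 0 = diagnosis, 1 = medication
time_table = rng.normal(size=(366, dim))      # visit offset in days

def embed(token_ids, type_ids, day_offsets):
    """Patient embedding = covariate + type + time embeddings, summed per token."""
    return token_table[token_ids] + type_table[type_ids] + time_table[day_offsets]

# Three flattened covariate tokens -> three dim-dimensional input vectors.
x = embed([1, 2, 3], [0, 1, 0], [0, 0, 31])
# x.shape == (3, 128)
```

Summing the three components (rather than concatenating them) keeps the input dimension fixed, mirroring how token, segment, and position embeddings are combined in standard Transformer models.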



¹ https://www.ibm.com/products/marketscan-research-databases



Figure 1: The overall pipeline of CURE. It consists of three parts: 1) data encoding of longitudinal patient data; 2) unsupervised pre-training on unlabeled data; and 3) fine-tuning on downstream labeled data for treatment effect estimation. In TEE, labels mean the studied treatment α and outcome y of the patient sequence x.


We are the first study to demonstrate the success of adopting the pre-training and fine-tuning framework for representation learning of patient data for TEE, together with necessary but minimal changes to the transformer architecture and real-world case studies on randomized clinical trials. We summarize our main contributions as follows.

• We propose CURE, a novel transformer-based pre-training and fine-tuning framework for TEE. We present a new patient data encoding method to encode structured observational patient data, incorporating covariate type and time into patient embeddings.

• We obtain and preprocess large-scale patient data from real-world medical claims data as our pre-training resource. We derive 4 downstream TEE tasks according to study designs and domain knowledge from established RCTs for model evaluation.

• We conduct thorough experiments and show that CURE yields superior performance on all downstream tasks compared to state-of-the-art TEE methods. On average across the 4 tasks, we achieve 3.8% and 6.9% absolute improvement in AUC and AUPR, respectively, for outcome prediction, and 15.7% absolute improvement in IF-PEHE for TEE over the best baseline. We also verify the estimated treatment effects against the conclusions of the corresponding RCTs.

• We further explore the effectiveness of CURE in several ablation studies, including the proposed patient embedding, the influence of pre-training data size on downstream tasks, and generalizability under low-resource fine-tuning data.

