META TEMPORAL POINT PROCESSES

Abstract

A temporal point process (TPP) is a stochastic process whose realization is a sequence of discrete events in time. Recent work on TPPs models the process with a neural network in a supervised learning framework, where the training set is a collection of all the available sequences. In this work, we propose to train TPPs in a meta learning framework, treating each sequence as a separate task, via a novel framing of TPPs as neural processes (NPs). We introduce context sets to model TPPs as an instantiation of NPs. Motivated by attentive NPs, we also introduce local history matching to help the model learn more informative features. We demonstrate the potential of the proposed method on popular public benchmark datasets and tasks, and compare it with state-of-the-art TPP methods.

1. INTRODUCTION

With the advancement of deep learning, there has been growing interest in modeling temporal point processes (TPPs) using neural networks. Although the community has developed many innovations in how neural TPPs encode the history of past events (Biloš et al., 2021) or how they decode these representations into predictions of the next event (Shchur et al., 2020; Lin et al., 2022), the general training framework for TPPs has been supervised learning, where a model is trained on a collection of all the available sequences. However, supervised learning is susceptible to overfitting, especially in high-noise environments, and generalization to new tasks can be challenging. In recent years, meta learning has emerged as an alternative to supervised learning, as it aims to adapt or generalize well to new tasks, resembling how humans can learn new skills from a few examples.

Inspired by this, we propose to train TPPs in a meta learning framework. To this end, we treat each sequence as a "task", since it is a realization of a stochastic process with its own characteristics. For instance, consider the pickup times of taxis in a city. The dynamics of these event sequences are governed by many factors such as location, weather, and the routine of a taxi driver, which implies that the pattern of each sequence can differ significantly from the others. Under the supervised learning framework, a trained model tends to capture the patterns seen in training sequences well, but easily breaks on unseen patterns. As the goal of modeling TPPs is to estimate the true probability distribution of the next event time given the previous event times, we employ Neural Processes (NPs), a family of model-based meta learning methods with stochasticity, to model TPPs. In this work, we formulate neural TPPs as NPs by satisfying certain conditions of NPs, and term the result the Meta TPP.
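To make the task framing concrete, the sketch below shows one way to turn a single event sequence into an NP-style episode: a context set the model conditions on and a target set it must predict. This is a hypothetical illustration of the general idea, not the paper's exact splitting procedure; the prefix-based split and the `context_frac` parameter are assumptions made here for clarity.

```python
def make_np_episode(event_times, context_frac=0.5):
    """Split one event sequence (one 'task') into a context set used for
    conditioning and a target set used for prediction, NP-style.

    We condition on a prefix of the sequence so that the temporal order
    of events is preserved; `context_frac` controls the split point.
    """
    n_ctx = max(1, int(len(event_times) * context_frac))
    context = event_times[:n_ctx]   # observed history (conditioning set)
    target = event_times[n_ctx:]    # events the model must predict
    return context, target

# Each sequence is its own task; a meta-batch mixes episodes from many tasks.
sequences = [
    [0.4, 1.1, 1.9, 3.2, 4.0, 5.5],  # e.g., one taxi's pickup times
    [0.2, 0.9, 2.5, 2.8],            # another taxi, different dynamics
]
episodes = [make_np_episode(s) for s in sequences]
print(episodes[0])  # → ([0.4, 1.1, 1.9], [3.2, 4.0, 5.5])
```

Because every sequence yields its own context/target episode, the model is trained to adapt to a new sequence's dynamics from its context set alone, rather than memorizing patterns shared across the whole training collection.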
Inspired by the NP literature, we further propose the Meta TPP with a cross-attention module, which we refer to as the Attentive TPP. We demonstrate the strong potential of the proposed method through extensive experiments. Our contributions can be summarized as follows:

• To the best of our knowledge, this is the first work that formulates the TPP problem in a meta learning framework, opening up a new research direction in neural TPPs.

• Inspired by the NP literature, we present a conditional Meta TPP formulation, followed by a latent-path extension, culminating in our proposed Attentive TPP model.
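The cross-attention mechanism referenced above can be sketched generically: target queries attend to context event representations, so each prediction draws selectively on the most relevant parts of the observed history. This is a minimal sketch of standard scaled dot-product cross-attention, not the Attentive TPP's exact architecture; the shapes and single-head form are simplifying assumptions.

```python
import numpy as np

def cross_attention(query, keys, values):
    """Single-head scaled dot-product cross-attention.

    query:  (n_q, d)   target-side representations
    keys:   (n_ctx, d) context event representations
    values: (n_ctx, d_v) context features to aggregate
    Returns (n_q, d_v): a context summary per query.
    """
    d = query.shape[-1]
    scores = query @ keys.T / np.sqrt(d)              # (n_q, n_ctx)
    # Numerically stable softmax over the context axis.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ values                           # (n_q, d_v)
```

The design point is that, unlike a mean-pooled context (as in the basic conditional NP), attention lets each target query weight context events differently, which is what allows local patterns in the history to inform individual predictions.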

