LEARNING PROBABILISTIC TOPOLOGICAL REPRESENTATIONS USING DISCRETE MORSE THEORY

Abstract

Accurate delineation of fine-scale structures is an important yet challenging problem. Existing methods use topological information as an additional training loss, yet ultimately make pixel-wise predictions. In this paper, we propose a novel deep learning based method to learn topological/structural representations. We use discrete Morse theory and persistent homology to construct a one-parameter family of structures as the topological/structural representation space. Furthermore, we learn a probabilistic model that can perform inference tasks in this topological/structural representation space. Our method generates true structures rather than pixel-maps, leading to better topological integrity in automatic segmentation tasks. It also facilitates semi-automatic interactive annotation/proofreading via the sampling of structures and structure-aware uncertainty.

1. INTRODUCTION

Accurate segmentation of fine-scale structures, e.g., vessels, neurons, and membranes, is crucial for downstream analysis. In recent years, topology-inspired losses have been proposed to improve structural accuracy (Hu et al., 2019; 2021; Shit et al., 2021; Mosinska et al., 2018; Clough et al., 2020). These losses identify topologically critical locations at which a segmentation network is error-prone, and force the network to improve its prediction at these critical locations. However, these loss-based methods are still not ideal. They are built on a standard segmentation network, and thus only learn pixel-wise feature representations. This causes several issues. First, a standard segmentation network makes pixel-wise predictions. Thus, at the inference stage, topological errors, e.g., broken connections, can still happen, even though they may be mitigated by the topology-inspired losses. Another issue is uncertainty estimation, i.e., estimating how certain a segmentation network is at different locations. Uncertainty maps can direct the focus of human annotators for efficient proofreading. However, for fine-scale structures, existing pixel-wise uncertainty maps are not effective. As shown in Fig. 1(d), every pixel adjacent to a vessel branch is highly uncertain, regardless of whether the branch is salient or not. What is more desirable is a structural uncertainty map that can highlight uncertain branches (e.g., Fig. 1(f)).

To fundamentally address these issues, we propose to directly model and reason about the structures. In this paper, we propose a novel deep learning based method that directly learns the topological/structural representation of images. To move from pixel space to structure space, we apply classic discrete Morse theory (Milnor, 1963; Forman, 2002) to decompose an image into a Morse complex, consisting of structural elements such as branches and patches. These structural elements are the hypothetical structures one can infer from the input image; their combinations constitute a space of structures arising from the input image. See Fig. 2(c) for an illustration. To reason further with structures, we propose to learn a probabilistic model over the structure space. The challenge is that the space consists of exponentially many branch combinations and is thus of very high dimension. To reduce the learning burden, we introduce the theory of persistent homology (Sousbie, 2011; Delgado-Friedrichs et al., 2015; Wang et al., 2015) for structure pruning. Each branch has its own persistence, measuring its relative saliency. By continuously thresholding the complete Morse complex by persistence, we obtain a sequence of Morse complexes parameterized by the persistence threshold ϵ. See Fig. 2(d). By learning a Gaussian over ϵ, we learn a parametric probabilistic model over these structures.
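The thresholding scheme above can be illustrated with a small sketch. This is not the paper's implementation: the branch names, persistence values, and the Gaussian parameters below are all hypothetical, chosen only to show how a single threshold ϵ induces a nested one-parameter family of structures, and how a Gaussian over ϵ turns into a per-branch (structure-level, rather than pixel-level) survival probability.

```python
import math

# Hypothetical persistence values for five branches of a Morse complex
# (illustrative numbers, not from the paper).
branch_persistence = {"b1": 0.9, "b2": 0.7, "b3": 0.3, "b4": 0.1, "b5": 0.05}

def prune(branches, eps):
    """Keep only branches whose persistence meets the threshold eps."""
    return {b for b, p in branches.items() if p >= eps}

def keep_probability(p, mu, sigma):
    """P(branch survives) = P(eps <= p) when eps ~ N(mu, sigma^2),
    computed via the Gaussian CDF."""
    return 0.5 * (1.0 + math.erf((p - mu) / (sigma * math.sqrt(2.0))))

# Sweeping eps yields a nested family of Morse complexes:
# each larger threshold keeps a subset of the previous structure.
family = [prune(branch_persistence, eps) for eps in (0.0, 0.2, 0.5, 0.8)]

# A (hypothetical) learned Gaussian over eps induces per-branch survival
# probabilities, i.e. a structure-aware uncertainty over branches.
mu, sigma = 0.4, 0.15
uncertainty = {b: keep_probability(p, mu, sigma)
               for b, p in branch_persistence.items()}
```

Note that the nesting is what makes the family one-dimensional: a single scalar ϵ indexes all candidate structures, so learning a distribution over structures reduces to learning a distribution over ϵ.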

