A LAPLACE-INSPIRED DISTRIBUTION ON SO(3) FOR PROBABILISTIC ROTATION ESTIMATION

Abstract

Estimating the 3DoF rotation from a single RGB image is an important yet challenging problem. Probabilistic rotation regression has attracted increasing attention, as it expresses uncertainty information along with the prediction. Though modeling noise with the Gaussian-resembling Bingham and matrix Fisher distributions is natural, these distributions are known to be sensitive to outliers due to their quadratic penalty on deviations. In this paper, we draw inspiration from the multivariate Laplace distribution and propose a novel Rotation Laplace distribution on SO(3). The Rotation Laplace distribution is robust to the disturbance of outliers and concentrates gradient in the low-error region, resulting in better convergence. Our extensive experiments show that the proposed distribution achieves state-of-the-art performance on rotation regression tasks over both probabilistic and non-probabilistic baselines. Our project page is at pkuepic.github.io/RotationLaplace.

1. INTRODUCTION

Incorporating neural networks to perform rotation regression is of great importance in the fields of computer vision, computer graphics and robotics (Wang et al., 2019b; Yin et al., 2022; Dong et al., 2021; Breyer et al., 2021). To close the gap between the SO(3) manifold and the Euclidean space in which neural network outputs live, one popular line of research explores learning-friendly rotation representations, including the 6D continuous representation (Zhou et al., 2019), the 9D matrix representation with SVD orthogonalization (Levinson et al., 2020), etc. Recently, Chen et al. (2022) focus on the gradient backpropagation process and replace vanilla automatic differentiation with an SO(3) manifold-aware gradient layer, which sets a new state of the art in rotation regression tasks. Reasoning about the uncertainty information along with the predicted rotation is also attracting more and more attention, enabling many applications in aerospace (Crassidis & Markley, 2003), autonomous driving (McAllister et al., 2017) and localization (Fang et al., 2020). On this front, recent efforts model the uncertainty of rotation regression via probabilistic modeling of the rotation space. The most commonly used distributions are the Bingham distribution (Bingham, 1974) on S^3 for unit quaternions and the matrix Fisher distribution (Khatri & Mardia, 1977) on SO(3) for rotation matrices. These two distributions are equivalent to each other (Prentice, 1986) and resemble the Gaussian distribution in Euclidean space (Bingham, 1974; Khatri & Mardia, 1977). While modeling noise using Gaussian-like distributions is well-motivated by the Central Limit Theorem, the Gaussian distribution is well known to be sensitive to outliers in probabilistic regression models (Murphy, 2012).
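As a concrete illustration of the 9D representation with SVD orthogonalization mentioned above (Levinson et al., 2020), the sketch below projects an arbitrary 3x3 network output onto the nearest rotation matrix. This is a minimal numpy rendering of the standard special orthogonal Procrustes projection, not the authors' implementation; the function name and the example input are our own.

```python
import numpy as np

def svd_orthogonalize(m):
    """Project an arbitrary 3x3 matrix (e.g. a raw 9D network output,
    reshaped) onto SO(3) via SVD: R = U diag(1, 1, det(U V^T)) V^T.
    The det(.) factor guards against reflections, so det(R) = +1."""
    u, _, vt = np.linalg.svd(m)
    d = np.linalg.det(u @ vt)
    return u @ np.diag([1.0, 1.0, d]) @ vt

# A noisy 3x3 output is mapped to a valid rotation matrix.
noisy = np.eye(3) + 0.1 * np.random.default_rng(0).standard_normal((3, 3))
R = svd_orthogonalize(noisy)
```

The projection is continuous almost everywhere in the 9D input, which is part of why such representations are considered learning-friendly compared with, e.g., unit quaternions.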
This is because the Gaussian distribution penalizes deviations quadratically, so predictions with larger errors weigh much more heavily in learning than low-error ones, potentially resulting in suboptimal convergence when a certain number of outliers are present. Unfortunately, in rotation regression tasks we fairly often come across large prediction errors, e.g. 180° errors, due to either the (near-)symmetric nature of the objects or severe occlusions (Murphy et al., 2021). In Fig. 1 (left), taking training on single-image rotation regression as an example, we show the statistics of predictions after achieving convergence, assuming a matrix Fisher distribution (as done in Mohlin et al. (2020)). The blue histogram shows the population with different prediction errors and the red dots are the impacts of these predictions on learning, evaluated
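The quadratic-penalty argument above can be made concrete with a minimal Euclidean-space analogue (a sketch of the general principle, not the SO(3) distributions themselves; the function names are ours): the Gaussian negative log-likelihood grows quadratically in the error, so its gradient scales linearly with the error, whereas the Laplace NLL grows linearly, so its gradient magnitude is constant and a single outlier cannot dominate the learning signal.

```python
import numpy as np

def gaussian_nll_grad(err, sigma=1.0):
    # d/d_err of err^2 / (2 sigma^2): grows linearly with the error.
    return err / sigma**2

def laplace_nll_grad(err, b=1.0):
    # d/d_err of |err| / b: constant magnitude regardless of error size.
    return np.sign(err) / b

errors = np.array([0.1, 1.0, 10.0])  # inlier ... outlier
g_gauss = gaussian_nll_grad(errors)    # outlier gradient is 100x the inlier's
g_laplace = laplace_nll_grad(errors)   # all gradients have equal magnitude
```

This is exactly the mechanism by which the red dots in Fig. 1 concentrate at high-error predictions under the matrix Fisher assumption.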



† He Wang and Baoquan Chen are the corresponding authors ({hewang, baoquan}@pku.edu.cn).

