MULTI-OBJECTIVE OPTIMIZATION VIA EQUIVARIANT DEEP HYPERVOLUME APPROXIMATION

Abstract

Optimizing multiple competing objectives is a common problem across science and industry. The inherent, inextricable trade-offs between these objectives lead to the task of exploring their Pareto front. A meaningful quantity for this purpose is the hypervolume indicator, which is used in Bayesian Optimization (BO) and in Evolutionary Algorithms (EAs). However, the computational complexity of calculating the hypervolume scales unfavorably with an increasing number of objectives and data points, which restricts its use in these common multi-objective optimization frameworks. To overcome these restrictions, previous work has focused on approximating the hypervolume using deep learning. In this work, we propose a novel deep learning architecture to approximate the hypervolume function, which we call DeepHV. For better sample efficiency and generalization, we exploit the fact that the hypervolume is scale-equivariant in each of the objectives as well as permutation-invariant w.r.t. both the objectives and the samples, by using a deep neural network that is equivariant w.r.t. the combined group of scalings and permutations. We show through an ablation study that including these symmetries leads to significantly improved model accuracy. We evaluate our method against exact and approximate hypervolume methods in terms of accuracy, computation time, and generalization. We also apply and compare our methods to state-of-the-art multi-objective BO methods and EAs on a range of synthetic and real-world benchmark test cases. The results show that our methods are promising for such multi-objective optimization tasks.
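To make the symmetries mentioned above concrete, the following sketch computes the exact hypervolume for the two-objective case (where a simple sweep over the sorted points suffices) and numerically checks the two properties the architecture is designed around: scaling each objective by a factor scales the hypervolume by the product of the factors, and permuting either the samples or the objectives leaves it unchanged. This is an illustrative example, not the DeepHV model itself; the function name `hypervolume_2d` and the maximization convention with reference point `ref` are our assumptions here.

```python
import numpy as np

def hypervolume_2d(points, ref):
    """Exact hypervolume (maximization) dominated by `points` w.r.t. `ref`.

    Only valid for two objectives: sort by the first objective in
    descending order and accumulate the area of the non-overlapping
    rectangular slabs above the running best of the second objective.
    """
    pts = points[np.all(points > ref, axis=1)]   # discard points not dominating ref
    pts = pts[np.argsort(-pts[:, 0])]            # first objective, descending
    hv, best_y = 0.0, ref[1]
    for x, y in pts:
        if y > best_y:                           # non-dominated point: add its slab
            hv += (x - ref[0]) * (y - best_y)
            best_y = y
    return hv

Y = np.array([[1.0, 3.0], [2.0, 2.0], [3.0, 1.0]])  # a small Pareto front
r = np.array([0.0, 0.0])
c = np.array([2.0, 5.0])                             # per-objective scalings

hv = hypervolume_2d(Y, r)
# Scale equivariance: HV(c * Y, c * r) = prod(c) * HV(Y, r)
assert np.isclose(hypervolume_2d(Y * c, r * c), c.prod() * hv)
# Permutation invariance w.r.t. samples and w.r.t. objectives
assert np.isclose(hypervolume_2d(Y[::-1], r), hv)
assert np.isclose(hypervolume_2d(Y[:, ::-1], r), hv)
```

For three or more objectives no such simple sweep exists, which is precisely the regime where exact computation becomes expensive and a learned approximation is attractive.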

1. INTRODUCTION

Imagine that, while listening to a lecture, you also quickly want to check the latest news on your phone, so you can appear informed during lunch. As an experienced listener, who knows which lecture material is important, and an excellent reader, who knows how to scan the headlines, you are confident in your abilities at each of these tasks. So you continue listening to the lecture while scrolling through the news. Suddenly you realize you need to split your focus. You face the unavoidable trade-off between properly listening to the lecture while slowly reading the news, or missing important lecture material while fully processing the news. Since this is not your first rodeo, you have learned over time how to transition between these competing objectives while remaining optimal under the trade-off constraints. Since you do not want to stop listening to the lecture, you decide to listen as closely as possible while still making some progress on the news. Later, during lunch, you propose building an AI that can read the news while listening to a lecture. The question remains how to train an AI to excel at different, possibly competing, tasks or objectives and to make deliberate, well-calibrated trade-offs between them whenever necessary. Simultaneous optimization of multiple, possibly competing, objectives is not just a challenge in our daily routines; it also finds widespread application in many fields of science. For instance, in machine learning (Wu et al., 2019; Snoek et al., 2012), engineering (Liao et al., 2007; Oyama et al., 2018), and chemistry (O'Hagan et al., 2005; Koledina et al., 2019; MacLeod et al., 2022; Boelrijk et al., 2021; 2023; Buglioni et al., 2022).

