HYPERPARAMETER TRANSFER ACROSS DEVELOPER ADJUSTMENTS

Abstract

After developer adjustments to a machine learning (ML) algorithm, how can the results of an old hyperparameter optimization (HPO) automatically be used to speed up a new HPO? This question poses a challenging problem, as developer adjustments can change which hyperparameter settings perform well, or even the hyperparameter search space itself. While many approaches exist that leverage knowledge obtained on previous tasks, knowledge from previous development steps so far remains entirely untapped. In this work, we remedy this situation and propose a new research framework: hyperparameter transfer across adjustments (HT-AA). To lay a solid foundation for this research framework, we provide four simple HT-AA baseline algorithms and eight benchmarks that vary various aspects of ML algorithms, their hyperparameter search spaces, and the neural architectures used. The best baseline, on average and depending on the budgets for the old and new HPO, reaches a given performance 1.2-3.6x faster than a prominent HPO algorithm without transfer. As HPO is a crucial step in ML development but requires extensive computational resources, this speedup would lead to faster development cycles, lower costs, and reduced environmental impacts. To make these benefits available to ML developers off-the-shelf and to facilitate future research on HT-AA, we provide Python packages for our baselines and benchmarks.

Graphical Abstract: Hyperparameter optimization (HPO) across adjustments to the algorithm or hyperparameter search space. A common practice is to perform HPO from scratch after each adjustment or to manually transfer knowledge in some ad-hoc fashion. In contrast, we propose a new research framework for automatic knowledge transfer across adjustments for HPO.

1. INTRODUCTION: A NEW HYPERPARAMETER TRANSFER FRAMEWORK

The machine learning (ML) community arrived at the current generation of ML algorithms by performing many iterative adjustments, and the path to artificial general intelligence likely requires many more. Each algorithm adjustment can change which settings of the algorithm's hyperparameters perform well, or even the hyperparameter search space itself (Chen et al., 2018; Li et al., 2020). For example, when deep learning developers change the optimizer, the learning rate's optimal value likely changes, and the new optimizer may also introduce new hyperparameters. Since ML algorithms are known to be very sensitive to their hyperparameters (Chen et al., 2018; Feurer & Hutter, 2019), developers are faced with the question of how to adjust their hyperparameters after changing their code. Assuming that the developers have results of one or several hyperparameter optimizations (HPOs) that were performed before the adjustments, they have two options:

1. Somehow manually transfer knowledge from old HPOs. This is the option chosen by many researchers and developers, explicitly disclosed, e.g., in the seminal work on AlphaGo (Chen et al., 2018). However, this is not a satisfying option, since manual decision making is time-consuming, often individually designed, and has already led to reproducibility problems (Musgrave et al., 2020).

2. Start the new HPO from scratch.

Leaving previous knowledge unutilized can lead to higher computational demands and worse performance (demonstrated empirically in Section 5). This is especially bad as the energy consumption of ML algorithms is already recognized as an environmental problem. For example, deep learning pipelines can have CO2 emissions on the order of magnitude of the lifetime emissions of multiple cars (Strubell et al., 2019), and their energy demands are growing rapidly: Schwartz et al. (2019) cite a "300,000x increase from 2012 to 2018". Therefore, reducing the number of evaluated hyperparameter settings should be a general goal of the community.

The main contribution of this work is the introduction of a new research framework: hyperparameter transfer across adjustments (HT-AA), which empowers developers with a third option:

3. Automatically transfer knowledge from previous HPOs.

This option leads to advantages in two aspects: the automation of decision making and the utilization of previous knowledge. On the one hand, automation makes it possible to benchmark strategies, replaces expensive manual decision making, and enables reproducible and comparable experiments; on the other hand, utilizing previous knowledge leads to faster development cycles, lower costs, and reduced environmental impacts. To lay a solid foundation for the new HT-AA framework, our individual contributions are as follows:

• We formally introduce a basic version of the HT-AA problem (Section 2).
• We provide four simple baseline algorithms for our basic HT-AA problem (Section 3).
• We provide a comprehensive set of eight novel benchmarks for our basic HT-AA problem (Section 4).
• We show the advantage of transferring across developer adjustments: some of our simple baseline algorithms outperform HPO from scratch by 1.2-3.6x on average, depending on the budgets (Section 5).
• We empirically demonstrate the need for well-vetted algorithms for HT-AA: two baselines modelled after actually-practiced manual strategies perform horribly on our benchmarks (Section 5).
• We relate the HT-AA framework to existing research efforts and discuss the research opportunities it opens up (Section 6).
• To facilitate future research on HT-AA, we provide open-source code for our experiments and benchmarks and provide a Python package with an out-of-the-box usable implementation of our HT-AA algorithms.
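To make the idea of automatic transfer across adjustments concrete, the following is a minimal, hypothetical sketch of one such strategy: warm-starting a new HPO by projecting the best configurations from the old HPO into the (possibly changed) search space, evaluating them first, and spending the remaining budget on random search. This is only an illustration of the problem setting, not one of the paper's actual baseline algorithms; all names (`transfer_hpo`, `project`) and the search-space encoding (a dict of discrete value lists) are assumptions made for this sketch.

```python
import random

def transfer_hpo(old_results, new_space, objective, budget):
    """Warm-start a new HPO with the best configurations from an old HPO.

    old_results: list of (config_dict, score) pairs from the pre-adjustment HPO
                 (lower score is better).
    new_space:   dict mapping each hyperparameter name to its list of allowed
                 values in the post-adjustment search space.
    objective:   function mapping a configuration dict to a score.
    budget:      total number of evaluations for the new HPO.
    """
    def project(cfg):
        # Keep old values that are still valid in the new space; fill
        # hyperparameters that are new or changed with a random choice.
        return {h: (cfg[h] if cfg.get(h) in vals else random.choice(vals))
                for h, vals in new_space.items()}

    # Evaluate projections of the best old configurations first ...
    seeds = [project(c) for c, _ in sorted(old_results, key=lambda x: x[1])]
    candidates = seeds[:budget]
    # ... then spend any remaining budget on random search in the new space.
    while len(candidates) < budget:
        candidates.append({h: random.choice(v) for h, v in new_space.items()})

    evaluated = [(c, objective(c)) for c in candidates]
    return min(evaluated, key=lambda x: x[1])
```

Even this naive strategy illustrates the two framework requirements discussed above: it is fully automatic (no manual decision making) and it reuses the old HPO's results instead of discarding them.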

2. HYPERPARAMETER TRANSFER ACROSS ADJUSTMENTS

Having given a broad introduction to the topic, we now provide a detailed description of hyperparameter transfer across developer adjustments (HT-AA). We first introduce hyperparameter optimization, then discuss the types of developer adjustments, and finally describe the transfer across these adjustments.

