JOINTIST: SIMULTANEOUS IMPROVEMENT OF MULTI-INSTRUMENT TRANSCRIPTION AND MUSIC SOURCE SEPARATION VIA JOINT TRAINING

Abstract

In this paper, we introduce Jointist, an instrument-aware multi-instrument framework that is capable of transcribing, recognizing, and separating multiple musical instruments from an audio clip. Jointist consists of an instrument recognition module that conditions the other two modules: a transcription module that outputs instrument-specific piano rolls, and a source separation module that utilizes instrument information and transcription results. The joint training of the transcription and source separation modules serves to improve the performance of both tasks. The instrument recognition module is optional and can be directly controlled by human users, which makes Jointist a flexible, user-controllable framework. Our challenging problem formulation makes the model highly useful in the real world, given that modern popular music typically consists of multiple instruments. Its novelty, however, necessitates a new perspective on how to evaluate such a model. In our experiments, we assess the proposed model from various aspects, providing a new evaluation perspective for multi-instrument transcription. Our subjective listening study shows that Jointist achieves state-of-the-art performance on popular music, outperforming existing multi-instrument transcription models such as MT3. We conducted experiments on several downstream tasks and found that, when utilizing transcription results obtained from Jointist, transcription improved by more than 1 percentage point (ppt.), source separation by 5 dB in SDR, downbeat detection by 1.8 ppt., chord recognition by 1.4 ppt., and key estimation by 1.4 ppt.
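The three-module design described in the abstract can be illustrated with a minimal sketch. The module bodies, names, and shapes below are our own illustration (using random placeholders rather than real networks), not the paper's implementation; the sketch only shows the data flow, in which the recognition output conditions both the transcription and separation modules, and the transcription output additionally feeds the separation module.

```python
import numpy as np

# Illustrative shapes only; not the paper's actual configuration.
T, F, N_INST = 100, 229, 10   # time frames, frequency bins, instrument classes
rng = np.random.default_rng(0)

def recognize_instruments(spec):
    """Hypothetical recognition module: binary vector of active instruments.
    In Jointist this module is optional and can be overridden by the user."""
    active = np.zeros(N_INST)
    active[[0, 3]] = 1.0          # pretend, e.g., piano and bass are detected
    return active

def transcribe(spec, inst_onehot):
    """Hypothetical transcription module conditioned on one instrument;
    returns an 88-pitch piano roll for that instrument."""
    return rng.random((spec.shape[0], 88))

def separate(spec, piano_roll, inst_onehot):
    """Hypothetical separation module that also consumes the transcription;
    returns a soft mask over the mixture spectrogram."""
    return rng.random(spec.shape)

spec = rng.random((T, F))                 # mixture spectrogram
condition = recognize_instruments(spec)   # conditions both modules below
stems = {}
for i in np.flatnonzero(condition):
    onehot = np.eye(N_INST)[i]
    roll = transcribe(spec, onehot)       # instrument-specific piano roll
    mask = separate(spec, roll, onehot)
    stems[int(i)] = mask * spec           # separated spectrogram per instrument
```

A user could replace `recognize_instruments` with a fixed instrument list, reflecting the user-controllable aspect of the framework.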

1. INTRODUCTION

Transcription, or automatic music transcription (AMT), is a music analysis task that aims to represent audio recordings using symbolic notations such as scores or MIDI (Musical Instrument Digital Interface) files (Benetos et al., 2013; 2018; Piszczalski & Galler, 1977). AMT can play an important role in music information retrieval (MIR) systems since symbolic information -- e.g., the pitch, duration, and velocity of notes -- determines a large part of our musical perception. A successful AMT system should provide a denoised version of music in a musically meaningful, symbolic format, which could ease the difficulty of many MIR tasks such as melody extraction (Ozcan et al., 2005), chord recognition (Wu & Li, 2018), beat tracking (Vogl et al., 2017), composer classification (Kong et al., 2020a; Kim et al., 2020), and emotion classification (Chou et al., 2021). Finally, high-quality AMT systems can be used to build large-scale datasets, as done by Kong et al. (2020b). This can, in turn, accelerate the development of neural network-based MIR systems, as these are often trained using otherwise scarcely available audio-aligned symbolic data (Brunner et al., 2018; Wu et al., 2020a; Hawthorne et al., 2018).

Currently, the only available pop music dataset is the Slakh2100 dataset by Manilow et al. (2019). The lack of a large-scale audio-aligned symbolic dataset for pop music impedes the development of other MIR systems that are trained using symbolic music representations.

In early research on AMT, the problem was often defined narrowly as the transcription of a single target instrument, typically piano (Klapuri & Eronen, 1998) or drums (Paulus & Klapuri, 2003), whereby the input audio only includes that instrument. The limitation of this strong and then-unavoidable assumption is clear: the model would not work for modern pop music, which makes up the majority of the music that people listen to. In other words, to handle realistic use-cases of AMT, it

