AN EFFICIENT ENCODER-DECODER ARCHITECTURE WITH TOP-DOWN ATTENTION FOR SPEECH SEPARATION

ABSTRACT

Deep neural networks have shown excellent prospects in speech separation tasks. However, obtaining good results while keeping low model complexity remains challenging in real-world applications. In this paper, we provide a bio-inspired efficient encoder-decoder architecture, called TDANet, that mimics the brain's top-down attention and reduces model complexity without sacrificing performance. The top-down attention in TDANet is extracted by a global attention (GA) module and cascaded local attention (LA) layers. The GA module takes multi-scale acoustic features as input to extract a global attention signal, which then modulates features of different scales through direct top-down connections. The LA layers use features of adjacent layers as input to extract a local attention signal, which is used to modulate the lateral input in a top-down manner. On three benchmark datasets, TDANet consistently achieved separation performance competitive with previous state-of-the-art (SOTA) methods at higher efficiency. Specifically, TDANet's multiply-accumulate operations (MACs) are only 5% of those of Sepformer, one of the previous SOTA models, and its CPU inference time is only 10% of Sepformer's. In addition, a large-size version of TDANet obtained SOTA results on the three datasets, with MACs still only 10% of Sepformer's and CPU inference time only 24% of Sepformer's. Our study suggests that top-down attention can be a more efficient strategy for speech separation.
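The global top-down modulation described above can be illustrated with a minimal sketch: multi-scale features are pooled into a single global signal, which then gates every scale through direct top-down connections. This is a simplified NumPy illustration under our own assumptions (function and variable names are ours, and the real GA module is a learned network, not fixed pooling and gating).

```python
import numpy as np

def global_attention_modulate(features):
    """Illustrative sketch of top-down global attention (not the paper's
    actual implementation): pool multi-scale features into one global
    signal, then gate each scale with it via broadcasting.

    `features`: list of arrays of shape (channels, time_i), where coarser
    scales have shorter time axes but share the channel dimension.
    """
    # Average-pool every scale over time to a (channels,) vector.
    pooled = [f.mean(axis=1) for f in features]
    # Fuse the per-scale summaries into one global attention signal.
    g = np.mean(pooled, axis=0)
    # Squash to [0, 1] so the signal acts as a multiplicative gate.
    gate = 1.0 / (1.0 + np.exp(-g))
    # Direct top-down connections: the same gate modulates every scale.
    return [gate[:, None] * f for f in features]

# Three scales with progressively coarser time resolution.
feats = [np.random.randn(4, 16), np.random.randn(4, 8), np.random.randn(4, 4)]
modulated = global_attention_modulate(feats)
```

Each output retains its input shape; only the channel-wise weighting changes, which is what makes the modulation cheap relative to full self-attention over every time step.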

1. INTRODUCTION

In cocktail parties, people's communications are inevitably disturbed by various sounds (Bronkhorst, 2015; Cherry, 1953), such as environmental noise and extraneous audio signals, potentially affecting the quality of communication. Humans can effortlessly perceive the speech signal of a target speaker in a cocktail party, improving the accuracy of speech recognition (Haykin & Chen, 2005). In the speech processing field, the corresponding challenge is to separate different speakers' audio from a mixture, known as speech separation. Due to the rapid development of deep neural networks (DNNs), DNN-based speech separation methods have improved significantly (Luo & Mesgarani, 2019; Luo et al., 2020; Tzinis et al., 2020; Chen et al., 2020; Subakan et al., 2021; Hu et al., 2021; Li & Luo, 2022). As in natural language processing, SOTA speech separation methods are now embracing increasingly complex models to achieve better separation performance, such as DPTNet (Chen et al., 2020) and Sepformer (Subakan et al., 2021). These models typically use multiple transformer layers (Vaswani et al., 2017) to capture longer contextual information, leading to a large number of parameters and high computational

