PATCORRECT: NON-AUTOREGRESSIVE PHONEME-AUGMENTED TRANSFORMER FOR ASR ERROR CORRECTION

Abstract

Speech-to-text errors made by automatic speech recognition (ASR) systems negatively impact downstream models that rely on ASR transcriptions. Error correction models, applied as a post-processing text editing step, have recently been developed to refine ASR output. However, efficient models for correcting errors in ASR transcriptions that meet the low-latency requirements of industrial-grade production systems have not been well studied. In this work, we propose a novel non-autoregressive (NAR) error correction approach that improves transcription quality by reducing word error rate (WER) and achieves robust performance across different upstream ASR systems. Our approach augments the text encoding of the Transformer model with a phoneme encoder that embeds pronunciation information. The representations from the phoneme encoder and the text encoder are combined via multi-modal fusion before being fed into the length tagging predictor, which predicts target sequence lengths. The joint encoders also provide inputs to the attention mechanism in the NAR decoder. We experiment with 3 open-source ASR systems of varying speech-to-text transcription quality and their erroneous transcriptions of 2 public English corpora. Results show that our PATCorrect (Phoneme Augmented Transformer for ASR error Correction) consistently outperforms the state-of-the-art NAR error correction method on English across different upstream ASR systems. For example, PATCorrect achieves 11.62% WER reduction (WERR) averaged over the 3 ASR systems, compared to the 9.46% WERR achieved by the method using the text modality alone, with inference latency comparable to other NAR models at the tens-of-milliseconds scale, especially on GPU hardware, while still being 4.2x to 6.7x faster than autoregressive models on the Common Voice and LibriSpeech datasets.

1. INTRODUCTION

Automatic speech recognition (ASR) models transcribe human speech into readable text. They power many applications, including real-time captioning and meeting transcription. An ASR model is also a critical component of large-scale natural language processing (NLP) systems such as Amazon Alexa, Google Home, and Apple Siri. Transcribed text serves as input for downstream models such as intent detection in voice assistants and response generation in voice chatbots. Errors made in speech-to-text ASR transcriptions can severely impact the accuracy of downstream models and thus lower the performance of the entire NLP system. Recent advances in ASR systems using Transformer Gulati et al. (2020); Tüske et al. (2021) and CNN based models Li et al. (2019) have achieved state-of-the-art (SOTA) accuracy as measured by word error rate (WER). However, due to the complexity of human natural language and the variable quality of speech audio, even SOTA ASR systems still make unavoidable and unrecoverable errors, such as phonetic confusion between similar-sounding expressions. To improve the quality of ASR transcriptions, error correction models are applied to the outputs of ASR systems to detect and correct errors. ASR error correction can be formulated as a sequence-to-sequence generation task, taking the ASR-transcribed text as the source sequence and the ground-truth speech-to-text transcription as the target sequence. Previous studies D'Haro & Banchs (2016); Liao et al. (2020); Mani et al. (2020) have proposed sequence-to-sequence models that decode the target sequence in an autoregressive (AR) manner. Wang et al. (2020) added phoneme information to the AR decoder and found that it helps retrieve the correct entity from ASR transcriptions. These autoregressive models achieve SOTA accuracy but incur high latency, making them infeasible for online production systems with low-latency constraints. For example, for voice digital assistants, the end-to-end latency budget for a response is on the order of milliseconds for a high-quality user experience. Hence, when incorporating an error correction model into the whole system, we must seriously consider the speed-accuracy trade-off.
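The evaluation metrics above can be made concrete. Below is a minimal Python sketch (the function names are ours, not from the paper) computing WER as word-level edit distance normalized by reference length, and relative WER reduction (WERR):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance (substitutions,
    insertions, deletions) divided by the reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # One-row dynamic-programming edit distance over words.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev_diag, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            cost = 0 if r == h else 1
            prev_diag, d[j] = d[j], min(d[j] + 1,          # deletion
                                        d[j - 1] + 1,      # insertion
                                        prev_diag + cost)  # substitution / match
    return d[-1] / max(len(ref), 1)

def werr(wer_before: float, wer_after: float) -> float:
    """Relative WER reduction (in %) achieved by a correction model."""
    return 100.0 * (wer_before - wer_after) / wer_before

# A correction model halving the error rate of a 2-substitution hypothesis:
print(wer("he bought an apple", "he brought an appel"))  # 0.5
print(werr(0.5, 0.25))  # 50.0
```

A correction model is only useful if its corrected WER is lower than the upstream ASR system's WER, which is exactly what WERR measures.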
Autoregressive decoding is a major bottleneck because it cannot be parallelized during inference, and thus fails to meet the latency budget allocated to the ASR error correction component in the end-to-end pipeline. This critical need to reduce latency strongly motivates the use of non-autoregressive (NAR) models over AR models. Leng et al. (2021) applied a NAR sequence generation model with edit alignment to a Chinese corpus, achieving comparable WER reduction while being 6 times faster than AR models. However, the performance of this NAR approach has not been tested on English corpora. In this paper, we propose PATCorrect (Phoneme Augmented Transformer for ASR error Correction), shown in Figure 1, a novel NAR ASR error correction model with edit alignment that uses both text and phoneme representations of the ASR-transcribed sentences. PATCorrect creates inputs for the length tagging predictor by applying a multi-modal fusion approach that combines the phoneme representation and the text representation into joint feature embeddings. Both encoders (text and phoneme) interact with the NAR decoder via the encoder-decoder attention mechanism. PATCorrect improves the WER reduction (WERR) to 11.62%, compared to FastCorrect, the SOTA NAR method that uses a text-only representation of the input, with comparable inference latency at the tens-of-milliseconds scale. PATCorrect is robust and scalable across different upstream ASR systems. We use three ASR systems to transcribe two public English corpora, LibriSpeech and Common Voice, to obtain erroneous transcriptions as inputs. Experimental evaluations demonstrate that PATCorrect consistently improves transcription WER across upstream ASR models with varying levels of transcription quality. To demonstrate this improvement, we benchmark against other ASR error correction models applied to the same sets of erroneous transcriptions.
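To make the fusion step concrete, here is a minimal pure-Python sketch of one plausible instantiation: a token's text and phoneme feature vectors are concatenated and linearly projected into a joint embedding, which then feeds a length tagging predictor. The names, dimensions, and the concatenate-then-project operator are illustrative assumptions rather than the paper's exact architecture, and phoneme features are assumed to be already aligned to token positions:

```python
def fuse(text_feat, phoneme_feat, proj):
    """Multi-modal fusion (sketch): concatenate a token's text and phoneme
    feature vectors, then linearly project back to the model dimension."""
    joint = text_feat + phoneme_feat   # list concatenation -> 2d features
    return [sum(x * w for x, w in zip(joint, row)) for row in proj]

def predict_length_tag(fused, tag_weights):
    """Length tagging predictor (sketch): score each candidate target
    length 0..K for this source token and return the argmax."""
    scores = [sum(x * w for x, w in zip(fused, row)) for row in tag_weights]
    return max(range(len(scores)), key=scores.__getitem__)

# Toy example with model dimension d = 2:
text_feat, phoneme_feat = [1.0, 0.0], [0.0, 1.0]
proj = [[1, 0, 0, 0],          # each row maps the 2d-dim joint vector
        [0, 0, 0, 1]]          # back to one of the d output dimensions
joint = fuse(text_feat, phoneme_feat, proj)                # -> [1.0, 1.0]
tag = predict_length_tag(joint, [[0, 0], [1, 0], [0, 2]])  # -> 2
```

In a real model the projection and tag weights are learned, and the same joint representation is also attended to by the NAR decoder.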
Our contributions are summarized as follows:
• We propose PATCorrect, a novel model based on the Transformer architecture for NAR ASR error correction. The model uses a multi-modal fusion approach that augments the traditional input text encoding with an additional phoneme encoder to incorporate pronunciation information, one of the key characteristics of spoken utterances.
• Through extensive offline evaluations, we demonstrate that PATCorrect outperforms the state-of-the-art NAR ASR error correction model that uses the text modality alone. For example, PATCorrect improves WERR to 11.62% with inference latency at the same tens-of-milliseconds scale, while still being about 4.2x to 6.7x faster than AR models.
• To the best of our knowledge, we are the first to establish that multi-modal fusion is a promising direction for improving the accuracy of low-latency NAR methods for ASR error correction, and to comprehensively study the performance of NAR ASR error correction on English corpora across ASR systems of varying quality.
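The edit-alignment-based NAR decoding referenced above can be illustrated with the length tags themselves: each source token's predicted tag says how many target tokens it aligns to, and the decoder input is built by dropping (tag 0), keeping (tag 1), or duplicating (tag ≥ 2) tokens before all target positions are predicted in parallel. A sketch under that simplified tag semantics (the actual edit alignment is more involved):

```python
def expand_by_length_tags(tokens, tags):
    """Build the NAR decoder input: drop, keep, or duplicate each source
    token per its length tag so the input already has the predicted
    target length; the decoder then rewrites every position in
    parallel rather than left to right."""
    out = []
    for tok, n in zip(tokens, tags):
        out.extend([tok] * n)
    return out

# "ther is" predicted to expand to 3 target tokens:
print(expand_by_length_tags(["ther", "is"], [1, 2]))  # ['ther', 'is', 'is']
```

Because target length is fixed up front by the tags, every decoder position can be filled in a single parallel pass, which is the source of the NAR latency advantage.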

2. RELATED WORK

AUTOREGRESSIVE METHODS

The goal of ASR error correction is to convert erroneous source sequences from ASR outputs into target sequences with the errors corrected. It can be viewed as a neural machine translation (NMT) problem, with erroneous sentences as the source language and corrected sentences as the target language. Accordingly, research on ASR error correction started with conventional statistical machine translation methods; Cucu et al. (2013) applied them to error correction in domain-specific ASR systems. Anantaram et al. (2018) further utilized ontology learning to repair ASR outputs via a four-step method. Recent NMT methods based on Transformers Vaswani et al. (2017); Ng et al. (2019) have become the dominant approach for such sequence-to-sequence tasks.


