Eat: Enhanced ASR-TTS for Self-Supervised Speech Recognition
- Resource Type
- Conference
- Authors
- Baskar, Murali Karthick; Burget, Lukáš; Watanabe, Shinji; Astudillo, Ramon Fernandez; Černocký, Jan Honza
- Source
- ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6753-6757, Jun. 2021
- Subject
- Bioengineering; Communication, Networking and Broadcast Technologies; Computing and Processing; Signal Processing and Analysis; Training; Conferences; Speech recognition; Speech enhancement; Signal processing; Data models; Acoustics; cycle-consistency; self-supervision; sequence-to-sequence; speech recognition
- Language
- English
- ISSN
- 2379-190X
- Abstract
- Self-supervised ASR-TTS models suffer under out-of-domain data conditions. Here we propose an enhanced ASR-TTS (EAT) model that incorporates two main features: 1) the ASR→TTS direction is equipped with a language-model reward that penalizes ASR hypotheses before they are forwarded to TTS; 2) in the TTS→ASR direction, a hyper-parameter is introduced to scale the attention context derived from synthesized speech before it is passed to ASR, in order to handle out-of-domain data. Training strategies and the effectiveness of the EAT model are explored under out-of-domain data conditions. The results show that EAT significantly reduces the performance gap between supervised and self-supervised training, by an absolute 2.6% and 2.7% on LibriSpeech and BABEL, respectively.
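To make the two mechanisms in the abstract concrete, below is a minimal sketch of how they might be wired together. All module interfaces (`asr.beam_search`, `lm.score`, `tts.synthesize`, etc.) and the weight values are hypothetical placeholders introduced for illustration; this is not the authors' implementation, only one plausible reading of the described objectives.

```python
# Minimal sketch of the two EAT objectives, assuming hypothetical
# asr / tts / lm modules with the duck-typed interfaces used below.
import torch

def asr_to_tts_loss(speech, asr, tts, lm, lm_weight=0.3, beam_size=4):
    """ASR->TTS direction: decode hypotheses from unpaired speech, score
    them with a language model, and weight the TTS reconstruction loss by
    the combined ASR+LM score so that LM-implausible hypotheses are
    penalized before they drive the TTS reconstruction."""
    hyps, asr_logp = asr.beam_search(speech, beam_size)   # hypothetical API
    lm_logp = lm.score(hyps)                              # hypothetical API
    reward = asr_logp + lm_weight * lm_logp               # LM reward term
    recon = tts.reconstruction_loss(hyps, speech)         # per-hypothesis loss
    weights = torch.softmax(reward, dim=-1).detach()      # no grad through reward
    return (weights * recon).sum()

def tts_to_asr_loss(text, tts, asr, context_scale=0.5):
    """TTS->ASR direction: synthesize speech from unpaired text, then scale
    the attention context computed over the synthetic features by a
    hyper-parameter before the ASR decoder consumes it, reducing the
    mismatch caused by out-of-domain synthetic speech."""
    synth = tts.synthesize(text)                          # hypothetical API
    context = asr.encode_attend(synth)                    # attention context
    return asr.decoder_loss(context_scale * context, text)
```

In joint training, one would typically sum these unpaired-data losses with the supervised ASR and TTS losses on paired data; the `lm_weight` and `context_scale` values above are illustrative defaults, not the tuned settings from the paper.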