Full-Reference Speech Quality Estimation with Attentional Siamese Neural Networks
- Resource Type
- Conference
- Authors
- Mittag, Gabriel; Möller, Sebastian
- Source
- ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 346-350, May 2020
- Subject
- Signal Processing and Analysis
- Deep learning
- Training
- Neural networks
- Predictive models
- Network architecture
- Speech processing
- Internet telephony
- Speech quality
- Siamese networks
- Attention
- CNN-LSTM
- Language
- ISSN
- 2379-190X
In this paper, we present a full-reference speech quality prediction model based on a deep learning approach. The model computes feature representations of the reference and the degraded signal with a Siamese recurrent convolutional network that applies shared weights to both input signals. The resulting features are then used to align the two signals with an attention mechanism and are finally combined to estimate the overall speech quality. The proposed network architecture offers a simple solution to the time-alignment problem that arises for speech signals transmitted over Voice-over-IP networks, and it shows how the clean reference signal can be incorporated into speech quality models based on end-to-end trained neural networks.
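The attention-based alignment described in the abstract can be illustrated with a minimal NumPy sketch: for each reference frame, attention weights are computed over all degraded frames from feature similarity, yielding a soft alignment without an explicit time-alignment step. All names, shapes, and the dot-product similarity are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def attention_align(ref_feats, deg_feats):
    """Softly align degraded-signal frames to reference frames.

    ref_feats: (T_r, F) frame-wise features of the reference signal
    deg_feats: (T_d, F) frame-wise features of the degraded signal
    Returns (T_r, F): for each reference frame, a softmax-weighted
    combination of degraded frames (a soft time alignment).
    """
    # Similarity score between every reference and every degraded frame
    scores = ref_feats @ deg_feats.T                 # (T_r, T_d)
    # Softmax over degraded frames, numerically stabilized
    scores = scores - scores.max(axis=1, keepdims=True)
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=1, keepdims=True)
    return weights @ deg_feats                       # (T_r, F)

# Toy example: the "degraded" signal is the reference delayed by one frame
rng = np.random.default_rng(0)
ref = rng.standard_normal((5, 8))
deg = np.vstack([np.zeros((1, 8)), ref[:-1]])        # one-frame delay
aligned = attention_align(ref, deg)
print(aligned.shape)                                 # (5, 8)
```

In the full model, the aligned and reference features would then be combined (e.g. concatenated) and passed to a regression head that estimates the quality score; here only the alignment step is sketched.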