Speaker Characterization Using TDNN-LSTM Based Speaker Embedding
- Resource Type
- Conference
- Authors
- Chen, Chia-Ping; Zhang, Su-Yu; Yeh, Chih-Ting; Wang, Jia-Ching; Wang, Tenghui; Huang, Chien-Lin
- Source
- ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6211-6215, May 2019
- Subject
- Bioengineering; Communication, Networking and Broadcast Technologies; Signal Processing and Analysis; Feature extraction; Neural networks; NIST; Training data; Training; Speaker recognition; Mel frequency cepstral coefficient; speaker embedding; TDNN-LSTM; NIST SRE2018
- Language
- ISSN
- 2379-190X
In this paper, we propose speaker characterization using time-delay neural network and long short-term memory (TDNN-LSTM) based speaker embeddings. Three types of front-end feature extraction are investigated to find good features for speaker embedding, and three kinds of data augmentation are used to increase the amount and diversity of the training data. The proposed methods are evaluated on the National Institute of Standards and Technology (NIST) speaker recognition evaluation (SRE) tasks. Experimental results show that the proposed methods achieve a decision cost of 0.400 on the pooled SRE 2018 development set with a single system. In addition, by applying simple average score combination to the outputs of 12 systems, the proposed methods achieve an equal error rate (EER) of 5.56% and a minimum decision cost function of 0.423 on the SRE 2016 evaluation set.
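The "simple average score combination" mentioned in the abstract is a standard score-level fusion: for each trial, the fused score is the mean of that trial's scores across all systems. A minimal sketch, with hypothetical scores (the actual 12-system scores are not given in the abstract):

```python
import numpy as np

def average_fusion(system_scores):
    """Score-level fusion by simple averaging.

    system_scores: array-like of shape (n_systems, n_trials),
    one row of per-trial scores per system.
    Returns the per-trial mean over systems.
    """
    scores = np.asarray(system_scores, dtype=float)
    return scores.mean(axis=0)

# Hypothetical scores from three systems on four trials (illustrative only).
fused = average_fusion([
    [2.1, -0.5, 1.3, -1.8],
    [1.9, -0.2, 0.8, -2.0],
    [2.3, -0.8, 1.1, -1.6],
])
print(fused)
```

In practice, per-system scores are often mean/variance normalized before averaging so that no single system dominates the fusion; the abstract's "simple" suggests unweighted averaging as above.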