
Transfer Learning and Language Model Adaption for Low Resource Speech Recognition

Published on Jun 08, 2021

Abstract

We train an end-to-end recurrent neural network with an integrated n-gram language model to perform automatic speech recognition on a low resource dataset of telephone speech. The dataset is challenging: it is highly disfluent, contains distinctive accents and word choices, and is of poor audio quality. Our proposed method combines transfer learning and language model adaptation to obtain a 42.27% Word Error Rate (WER), improving on existing models (60.18% WER, 74.89% WER) and low resource models (79.82% WER). Nonetheless, this WER remains far higher than current benchmarks for high resource languages, so further research is needed to overcome the obstacles that low resource speech presents to high quality automatic speech recognition.
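The WER figures above are the standard word-level edit-distance metric: substitutions, deletions, and insertions needed to turn the hypothesis into the reference, divided by the reference length. As a minimal illustrative sketch (not the authors' evaluation code), WER can be computed with word-level Levenshtein distance:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate = (substitutions + deletions + insertions) / #reference words,
    computed via word-level Levenshtein distance."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all remaining reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)

# Two errors ("on", "the" deleted) over a 6-word reference -> 33.33% WER
print(round(wer("the cat sat on the mat", "the cat sat mat") * 100, 2))  # 33.33
```

A 42.27% WER therefore means that roughly two out of every five reference words are transcribed incorrectly, which conveys why the authors call for further research despite the improvement over prior models.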

Article ID: 2021S13

Month: May

Year: 2021

Address: Online

Venue: Canadian Conference on Artificial Intelligence

Publisher: Canadian Artificial Intelligence Association

URL: https://caiac.pubpub.org/pub/xzlgxkf1/
