We train an end-to-end recurrent neural network with an integrated n-gram language model to perform automatic speech recognition on a low-resource dataset of telephone speech. The dataset is challenging because it is highly disfluent, contains distinctive accents and word choices, and is of poor audio quality. Our proposed method uses both transfer learning and language model adaptation to obtain a 42.27% Word Error Rate (WER), improving on existing models (60.18% WER, 74.89% WER) and low-resource models (79.82% WER). Nonetheless, this WER remains far above current benchmarks for high-resource languages, so further research is needed to overcome the obstacles that low-resource speech presents to high-quality automatic speech recognition.
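The Word Error Rate figures above are the standard ASR metric: the word-level edit distance (substitutions, deletions, and insertions) between a hypothesis transcript and the reference, divided by the number of reference words. As a minimal illustration (not the authors' evaluation code), a sketch of the computation via dynamic programming:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level Levenshtein distance / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = minimum edits turning the first i reference words
    # into the first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)
```

For example, a hypothesis with one substitution in a four-word reference scores a 25% WER; a perfect transcript scores 0%.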
Article ID: 2021S13
Publisher: Canadian Artificial Intelligence Association