Pre-trained Transformer models have become popular in various Natural Language Processing (NLP) tasks, following a two-step process of 'pre-training' and 'fine-tuning'. However, even though pre-training corpora draw on abundant web text, they may still lack domain-specific knowledge. This can result in poor performance during the fine-tuning step when only limited training data is available. To address this issue, we propose a knowledge graph-based data expansion method that enables the model to achieve good results even with limited fine-tuning data. We extract entities from the text through Named Entity Recognition and then search for related information in the knowledge graph to expand the text's content. This allows the pre-trained model to acquire more external knowledge and enhance its training. We evaluate our data expansion method on several well-known models: BERT, RoBERTa, and GPT-3. Our experiments show that our approach can improve the accuracy of language models on text classification tasks when training data is limited.
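As a rough illustration of the pipeline described above (NER-based entity extraction followed by a knowledge-graph lookup whose results are appended to the input text), the sketch below uses spaCy for NER and a toy dictionary standing in for the knowledge graph. The abstract does not specify the NER system or the KG backend, so `query_knowledge_graph`, the toy facts, and all parameter names are illustrative assumptions rather than the authors' implementation.

```python
import spacy

# Load a general-purpose NER model (spaCy's small English model as an example;
# the paper does not state which NER system was used).
nlp = spacy.load("en_core_web_sm")

# Placeholder "knowledge graph": maps an entity mention to related facts.
# In the actual method this would be a lookup against an external knowledge
# graph; the dictionary here is purely illustrative.
TOY_KG = {
    "BERT": ["BERT is a Transformer-based language model."],
    "Canada": ["Canada is a country in North America."],
}

def query_knowledge_graph(entity_text):
    """Hypothetical stand-in for querying a real knowledge graph."""
    return TOY_KG.get(entity_text, [])

def expand_text(text, max_facts_per_entity=2):
    """Append KG facts about recognized entities to the original text."""
    doc = nlp(text)
    facts = []
    for ent in doc.ents:
        facts.extend(query_knowledge_graph(ent.text)[:max_facts_per_entity])
    # The expanded text (original input plus retrieved facts) is what would be
    # fed to the pre-trained model during fine-tuning.
    return text if not facts else text + " " + " ".join(facts)

if __name__ == "__main__":
    print(expand_text("BERT performs well on classification tasks in Canada."))
```

In this sketch the retrieved facts are simply concatenated after the original sentence; how the expanded text is combined with the input is a design choice the abstract leaves unspecified.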
Article ID: 2023L10
Month: June
Year: 2023
Address: Online
Venue: The 36th Canadian Conference on Artificial Intelligence
Publisher: Canadian Artificial Intelligence Association
URL: https://caiac.pubpub.org/pub/qigu1tdk