Abstract
The attention mechanism is now widely applied across natural language processing (NLP), and many deep neural networks built on it have been introduced and developed. A major drawback of these networks, however, is long training time, since each network must learn its attention values from scratch. In this paper, we propose an auxiliary method called the Guided Attention Mechanism (GAM), which uses prior knowledge to guide an NLP network toward appropriate attention values, thereby shortening training time and making the attention values more accurate. We design two processes for generating this prior knowledge, one based on regularization and one based on deep learning, and use the prior knowledge to guide the original network's attention values in terms of both value and angle. Experimental results show that, compared with the original network, a network using GAM improves classification accuracy by about 2% and reduces training time by 5∼9%.
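The abstract describes guiding attention values toward prior knowledge in terms of both value and angle. As a rough illustration only (the paper's exact loss is not given here, and the function name, weights, and term choices below are assumptions), one common way to realize such guidance is an auxiliary loss combining a squared-difference term (value) with a cosine-distance term (angle) between the network's attention weights and the prior:

```python
import numpy as np

def guided_attention_loss(attn, prior, w_value=1.0, w_angle=1.0):
    """Hypothetical auxiliary loss pulling attention weights toward a
    prior distribution, penalizing both value and angle differences.
    A sketch of the general idea, not the paper's exact formulation."""
    attn = np.asarray(attn, dtype=float)
    prior = np.asarray(prior, dtype=float)
    # Value term: mean squared difference between the weight vectors.
    value_term = np.mean((attn - prior) ** 2)
    # Angle term: one minus the cosine similarity of the two vectors.
    cos = np.dot(attn, prior) / (np.linalg.norm(attn) * np.linalg.norm(prior))
    angle_term = 1.0 - cos
    return w_value * value_term + w_angle * angle_term

# Identical distributions incur (near-)zero penalty; mismatched ones do not.
p = np.array([0.1, 0.2, 0.7])
q = np.array([0.7, 0.2, 0.1])
print(guided_attention_loss(p, p))
print(guided_attention_loss(p, q))
```

Added to a network's main objective with small weights, such a term nudges the attention layer toward the prior early in training without fixing the weights outright.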