Abstract
Text classification is a fundamental task in Natural Language Processing (NLP). However, given the complexity of semantic information, extracting useful features remains a critical challenge. Unlike traditional methods, we propose a new model built on two parallel RNN branches, which capture contextual information simultaneously through an LSTM and a GRU, respectively. Motivated by the siamese network, our architecture generates an attention matrix by computing the similarity between the context representations captured by the two branches, which ensures the effectiveness of the extracted features and further improves classification results. We evaluate the proposed model on six text classification tasks. Experimental results show that the proposed ABLGCNN model converges faster and achieves higher precision than competing models.
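The abstract describes computing an attention matrix from the similarity between the two branches' context representations. The paper's exact similarity function is not stated here; the following is a minimal NumPy sketch assuming dot-product similarity between hypothetical LSTM and GRU hidden-state sequences (`h_lstm`, `h_gru` are stand-in names, not from the paper):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical branch outputs: T timesteps, d-dimensional features each.
T, d = 5, 8
rng = np.random.default_rng(0)
h_lstm = rng.standard_normal((T, d))  # stand-in for the LSTM branch output
h_gru = rng.standard_normal((T, d))   # stand-in for the GRU branch output

# Similarity matrix between the two parallel branches (dot product assumed).
sim = h_lstm @ h_gru.T                # shape (T, T)
attn = softmax(sim, axis=-1)          # attention matrix; each row sums to 1

# Attend over one branch with weights derived from the similarity scores.
context = attn @ h_gru                # shape (T, d)
```

This sketch only illustrates the siamese-style similarity-to-attention step; the full model would feed `context` into downstream layers for classification.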
