Abstract
Recurrent Neural Networks (RNNs), particularly Long Short-Term Memory (LSTM), Bidirectional LSTM, and attention-based LSTM encoder-decoder networks, have been reported to achieve state-of-the-art results, in some cases surpassing human performance, in domains such as Speech Recognition, Sequence Labeling, Text Classification, and Image Caption Generation. The main focus of this paper is to present a simple LSTM and Bidirectional LSTM joint model for Intent Classification and Named Entity Recognition (NER), evaluated with and without a Convolutional Neural Network (CNN) as a feature extractor. The aim of the experiment is to improve model accuracy by transferring information from a model that performs well on one task to another model within a joint framework, and to determine whether any correlation, through the application of learned weights, aids the syntactic and semantic structural learning of the task. Comparative results for models with and without a CNN feature extractor prepended are also tabulated.
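As a rough illustration of the architecture described above, the following is a minimal PyTorch sketch of a joint model: a shared Bidirectional LSTM encoder, optionally preceded by a CNN feature extractor, feeding two heads, one for utterance-level intent classification and one for token-level NER. The layer sizes, pooling strategy, and head design here are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of a joint Intent Classification + NER model with an
# optional CNN feature extractor prepended to a BiLSTM encoder.
# All hyperparameters below are assumptions for illustration only.
import torch
import torch.nn as nn

class JointIntentNER(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden=128,
                 n_intents=10, n_tags=9, use_cnn=False):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.use_cnn = use_cnn
        if use_cnn:
            # CNN feature extractor prepended to the recurrent encoder.
            self.cnn = nn.Conv1d(emb_dim, emb_dim, kernel_size=3, padding=1)
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                              bidirectional=True)
        self.intent_head = nn.Linear(2 * hidden, n_intents)  # sentence-level
        self.ner_head = nn.Linear(2 * hidden, n_tags)        # token-level

    def forward(self, tokens):                        # tokens: (batch, seq)
        x = self.embed(tokens)                        # (batch, seq, emb)
        if self.use_cnn:
            # Conv1d expects (batch, channels, seq), hence the transposes.
            x = torch.relu(self.cnn(x.transpose(1, 2))).transpose(1, 2)
        h, _ = self.bilstm(x)                         # (batch, seq, 2*hidden)
        intent_logits = self.intent_head(h.mean(dim=1))  # pooled utterance
        tag_logits = self.ner_head(h)                 # per-token NER logits
        return intent_logits, tag_logits

# Usage: one forward pass over a dummy batch of 2 sequences of length 12.
model = JointIntentNER(vocab_size=5000, use_cnn=True)
intent_logits, tag_logits = model(torch.randint(1, 5000, (2, 12)))
```

Sharing a single encoder between the two heads is what lets learned weights from one task influence the other, which is the correlation the experiment sets out to measure.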
