Improving performance of Recurrent Neural Networks for Question-Answering with Attention-based Context Reduction
- Resource Type
- Conference
- Authors
- Modak, Sagnik; Chaudhury, Sujata; Rawat, Abhishek; Deb, Suman
- Source
- 2021 IEEE Mysore Sub Section International Conference (MysuruCon), pp. 723-728, Oct. 2021
- Subject
- Communication, Networking and Broadcast Technologies; Computing and Processing; Engineering Profession; Fields, Waves and Electromagnetics; Power, Energy and Industry Applications; Robotics and Control Systems; Signal Processing and Analysis; Training; Text analysis; Recurrent neural networks; IEEE Sections; Memory management; Focusing; Natural language processing; Language Models; Context Reduction; Attention; Question-Answering; Evidence
- Language
- English
In this paper, a method is introduced to reduce the context for a context-based question-answering model. Natural Language Processing (NLP) has progressed significantly in the last few years. Question-Answering (QA) is a task within Text Analysis, which in turn is a large field of NLP. Many QA models have been produced to date. Although these models perform various QA tasks with significant accuracy, they are based on key-value memory networks and have large external memory requirements. The main motivation of this research is to overcome the excessive memory usage of these previous models. The proposed algorithm uses attention for context reduction: by focusing on a reduced context, only a limited set of sentences has to be processed, which lowers the external memory requirement and speeds up inference. The method can be used with any existing end-to-end training model and thus requires no additional supervision. When used with an RNN model, it reduces the model's memory requirement and also decreases training time. The proposed method additionally provides evidence in support of the answer.
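The abstract does not detail how the attention scores are computed or how many sentences are retained. The PyTorch sketch below is a minimal illustration under assumed design choices, not the paper's implementation: it scores each context sentence against the question with dot-product attention over GRU encodings and keeps only the top-k sentences. All names here (SentenceEncoder, reduce_context, top_k) are hypothetical.

```python
import torch
import torch.nn as nn

class SentenceEncoder(nn.Module):
    """Encode a padded sequence of token ids into a fixed-size vector
    via the final hidden state of a GRU (an assumed design choice)."""
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)

    def forward(self, token_ids):
        _, h_n = self.gru(self.embed(token_ids))  # h_n: (1, batch, hidden)
        return h_n.squeeze(0)                     # (batch, hidden)

def reduce_context(encoder, sentences, question, top_k=3):
    """Return indices of the top_k context sentences most attended to
    by the question; only these go to the downstream QA model."""
    sent_vecs = encoder(sentences)                      # (num_sent, hidden)
    q_vec = encoder(question)                           # (1, hidden)
    scores = torch.softmax(sent_vecs @ q_vec.T, dim=0)  # attention weights
    k = min(top_k, sentences.size(0))
    keep = torch.topk(scores.squeeze(1), k).indices
    return keep.sort().values  # restore original sentence order

# Example: 5 context sentences of length 12, one question of length 8.
vocab_size = 1000
enc = SentenceEncoder(vocab_size)
context = torch.randint(1, vocab_size, (5, 12))
question = torch.randint(1, vocab_size, (1, 8))
kept = reduce_context(enc, context, question, top_k=2)
reduced_context = context[kept]  # the QA model now sees 2 sentences, not 5
```

Because the downstream RNN then processes only the retained sentences, its external memory footprint and per-question inference cost shrink roughly with the reduction ratio, which is the effect the abstract describes; the retained sentences also serve naturally as the supporting evidence for the answer.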