Please use this identifier to cite or link to this item:
http://dspace.bits-pilani.ac.in:8080/jspui/xmlui/handle/123456789/8227
Title: | Encoder-Decoder Architectures for Generating Questions |
Authors: | Sharma, Yashvardhan |
Keywords: | Computer Science; Automatic Question Generation; Neural Networks; Language Generation; Natural Language Processing |
Issue Date: | 2018 |
Publisher: | Elsevier |
Abstract: | With textual data on the internet exploding, from e-books and legal documents to product information, there is an opportunity to harness it for applications that aid human tasks. Question-generation systems can be used to build frequently-asked-questions lists, to create school quizzes, and as a step toward unified AI. This study explores various encoder-decoder architectures for generating questions from text inputs, using Stanford's SQuAD dataset for the training, development, and test sets; evaluation metrics such as BLEU, ROUGE, and training time were used to compare the effectiveness of the models. The article builds on current end-to-end systems by using gated recurrent units (GRUs) in place of long short-term memory (LSTM) units, which gives similar accuracy with less training time. It also shows the successful use of a convolution-based encoder for this task, which gives results comparable to the current state-of-the-art system with much less training time. |
URI: | https://www.sciencedirect.com/science/article/pii/S1877050918307518?via%3Dihub http://dspace.bits-pilani.ac.in:8080/xmlui/handle/123456789/8227 |
Appears in Collections: | Department of Computer Science and Information Systems |
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
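The abstract's claim that a GRU trains faster than an LSTM at similar accuracy follows largely from the GRU's smaller gate count (three gates versus the LSTM's four). A minimal parameter-count sketch illustrates this; the layer sizes below are illustrative assumptions, not the paper's actual settings:

```python
def gru_cell_params(input_size: int, hidden_size: int) -> int:
    """Parameter count of one GRU cell: 3 gates (reset, update, candidate),
    each with an input weight matrix, a recurrent weight matrix, and a bias."""
    per_gate = input_size * hidden_size + hidden_size * hidden_size + hidden_size
    return 3 * per_gate

def lstm_cell_params(input_size: int, hidden_size: int) -> int:
    """Parameter count of one LSTM cell: 4 gates (input, forget, output,
    candidate), each with the same per-gate weight and bias shapes."""
    per_gate = input_size * hidden_size + hidden_size * hidden_size + hidden_size
    return 4 * per_gate

if __name__ == "__main__":
    # Assumed sizes for illustration: 300-d word embeddings, 600-d hidden state.
    g = gru_cell_params(300, 600)
    l = lstm_cell_params(300, 600)
    print(g, l)  # the GRU cell has exactly 3/4 the LSTM cell's parameters
```

Fewer parameters per cell means fewer weights to update per training step, which is consistent with the reduced training time the study reports, though actual wall-clock speedups also depend on the implementation and hardware.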