Please use this identifier to cite or link to this item: http://dspace.bits-pilani.ac.in:8080/jspui/xmlui/handle/123456789/8227
Full metadata record
DC Field | Value | Language
dc.contributor.author | Sharma, Yashvardhan | -
dc.date.accessioned | 2023-01-02T11:01:54Z | -
dc.date.available | 2023-01-02T11:01:54Z | -
dc.date.issued | 2018 | -
dc.identifier.uri | https://www.sciencedirect.com/science/article/pii/S1877050918307518?via%3Dihub | -
dc.identifier.uri | http://dspace.bits-pilani.ac.in:8080/xmlui/handle/123456789/8227 | -
dc.description.abstract | With textual data on the internet exploding through e-books, legal documents, and product information, there is an opportunity to harness it for applications that aid human tasks. Question generation systems can be used to build frequently-asked-questions lists, create school quizzes, and serve the broader goal of unified AI. This study explores various encoder-decoder architectures for generating questions from text inputs, using Stanford's SQuAD dataset for the training, development, and test sets, and comparing the effectiveness of the models with evaluation metrics such as BLEU and ROUGE as well as training time. The article builds on a current end-to-end system by using a gated recurrent unit in place of a long short-term memory unit, which gives similar accuracy with less training time; it also shows the successful use of a convolution-based encoder for this task, which gives results comparable to the current state-of-the-art system with much less training time. | en_US
dc.language.iso | en | en_US
dc.publisher | Elsevier | en_US
dc.subject | Computer Science | en_US
dc.subject | Automatic Question Generation | en_US
dc.subject | Neural networks | en_US
dc.subject | Language Generation | en_US
dc.subject | Natural Language Processing | en_US
dc.title | Encoder-Decoder Architectures for Generating Questions | en_US
dc.type | Article | en_US
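
The abstract above describes swapping the LSTM in an end-to-end encoder-decoder for a GRU to cut training time at similar accuracy. What follows is a minimal sketch of that idea, assuming PyTorch; the class name, layer sizes, and dummy batch are illustrative assumptions, not the authors' implementation.

# Minimal GRU encoder-decoder sketch (assumed PyTorch; illustrative only,
# not the implementation from the article).
import torch
import torch.nn as nn

class QuestionGenerator(nn.Module):
    def __init__(self, vocab_size, emb_dim=256, hid_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # GRU in place of LSTM: two gates instead of three and no separate
        # cell state, so fewer parameters and faster training at comparable
        # accuracy (the trade-off the abstract reports).
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, src_ids, tgt_ids):
        # Encode the source passage; keep only the final hidden state.
        _, hidden = self.encoder(self.embed(src_ids))
        # Teacher-forced decoding conditioned on the encoder state.
        dec_out, _ = self.decoder(self.embed(tgt_ids), hidden)
        return self.out(dec_out)  # (batch, tgt_len, vocab) logits

# Usage with a dummy batch (hypothetical 10k-token vocabulary):
model = QuestionGenerator(vocab_size=10000)
src = torch.randint(0, 10000, (2, 30))   # tokenized source passage
tgt = torch.randint(0, 10000, (2, 12))   # tokenized target question
logits = model(src, tgt)                 # -> torch.Size([2, 12, 10000])
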
Appears in Collections:Department of Computer Science and Information Systems

Files in This Item:
There are no files associated with this item.

