Please use this identifier to cite or link to this item:
http://dspace.bits-pilani.ac.in:8080/jspui/handle/123456789/16357
Title: A Comparative Analysis of Transformer-Based Models for Document Visual Question Answering
Authors: Sharma, Yashvardhan
Keywords: Computer Science; Visual Question Answering (VQA); Text Visual Question Answering (Text-VQA); Document Visual Question Answering (DocVQA)
Issue Date: Jun-2023
Publisher: Springer
Abstract: Visual question answering (VQA) is one of the most exciting problems at the intersection of computer vision and natural language processing. It requires understanding an image and reasoning about it to answer a human query. Text Visual Question Answering (Text-VQA) and Document Visual Question Answering (DocVQA) are two subproblems of VQA that require extracting text from natural scene images and document images, respectively. Since answering questions about documents requires an understanding of layout and writing patterns, models that perform well on the Text-VQA task perform poorly on the DocVQA task. As transformer-based models achieve state-of-the-art results across deep learning, we train and fine-tune several transformer-based models (BERT, ALBERT, RoBERTa, ELECTRA, and DistilBERT) and examine their validation accuracy. This paper provides a detailed analysis of these transformer models and compares their accuracies on the DocVQA task.
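The BERT-family encoders named in the abstract are typically fine-tuned for DocVQA as extractive question answering: a linear head over each token's hidden vector predicts start and end logits, and the highest-scoring valid span is the answer. A minimal NumPy sketch of that span-prediction step (random weights and hidden states stand in for a fine-tuned encoder, and `best_span` is an illustrative helper, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in encoder output: one hidden vector per token. In a real
# pipeline these would come from a fine-tuned BERT-family encoder
# run over the question plus the OCR tokens of the document.
seq_len, hidden = 12, 16
hidden_states = rng.standard_normal((seq_len, hidden))

# Extractive-QA head: a single linear layer mapping each token's
# hidden vector to a (start, end) logit pair.
w = rng.standard_normal((hidden, 2))
b = np.zeros(2)
logits = hidden_states @ w + b              # shape (seq_len, 2)
start_logits, end_logits = logits[:, 0], logits[:, 1]

def best_span(start_logits, end_logits, max_len=8):
    """Return the (start, end) indices with the highest combined
    score, subject to start <= end and a maximum span length."""
    best, best_score = (0, 0), -np.inf
    for s in range(len(start_logits)):
        for e in range(s, min(s + max_len, len(end_logits))):
            score = start_logits[s] + end_logits[e]
            if score > best_score:
                best, best_score = (s, e), score
    return best

start, end = best_span(start_logits, end_logits)
print(start, end)  # token indices of the predicted answer span
```

During fine-tuning, the head's cross-entropy loss compares the start and end logits against the gold answer's token positions; the search above is only needed at inference time.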
URI: https://link.springer.com/chapter/10.1007/978-981-99-0609-3_16
http://dspace.bits-pilani.ac.in:8080/jspui/handle/123456789/16357
Appears in Collections: Department of Computer Science and Information Systems
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.