
Please use this identifier to cite or link to this item: http://dspace.bits-pilani.ac.in:8080/jspui/handle/123456789/19232
Full metadata record
DC Field | Value | Language
dc.contributor.author | Sharma, Yashvardhan | -
dc.date.accessioned | 2025-08-26T03:56:49Z | -
dc.date.available | 2025-08-26T03:56:49Z | -
dc.date.issued | 2025-07 | -
dc.identifier.uri | https://ieeexplore.ieee.org/abstract/document/11072709 | -
dc.identifier.uri | http://dspace.bits-pilani.ac.in:8080/jspui/handle/123456789/19232 | -
dc.description.abstract | Deep learning algorithms have demonstrated exceptional performance on many computer vision and natural language processing tasks. However, to answer general questions grounded in the linguistic content of an image, a model must both understand the scene and reason over the text it contains. A question such as “What temperature is my oven set to?” requires the model to recognize objects visually and then spatially localize the text associated with them. Existing Visual Question Answering models fail to recognize such linguistic features in images, a capability that is crucial for assisting the visually impaired. This paper addresses visual question answering that reasons jointly over the question text, optical character recognition (OCR) tokens, and visual modalities. The proposed model focuses on the most relevant parts of the image using an attention mechanism and, after computing pairwise attention, passes all features to a fusion encoder that is biased toward the OCR-linguistic features. Instead of treating answer prediction as classification, the model uses a dynamic pointer network for iterative answer prediction, trained with a focal loss to counter class imbalance. The proposed model achieves 46.8% accuracy on the TextVQA dataset and an average of 55.21% on the STVQA dataset. These results demonstrate the effectiveness of the approach and yield a Multi-Modal Attentive Framework that learns separate text, object, and OCR features and predicts answers based on the text in the image. | en_US
dc.language.iso | en | en_US
dc.publisher | IEEE | en_US
dc.subject | Computer Science | en_US
dc.subject | Visual question answering system (VQA) | en_US
dc.subject | Text visual question answering system (Text-VQA) | en_US
dc.subject | Optical character recognition (OCR) | en_US
dc.subject | Attention mechanism | en_US
dc.subject | Natural Language Processing (NLP) | en_US
dc.title | A multi-modal attentive framework that can interpret text (MMAT) | en_US
dc.type | Article | en_US
Appears in Collections: Department of Computer Science and Information Systems
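
Note: the abstract describes a dynamic pointer network that predicts answers iteratively over a fixed vocabulary plus the OCR tokens detected in the image, rather than using a closed classification head. The paper's implementation is not attached to this record; the following is a minimal PyTorch sketch of one such decoding step, with all names (PointerStep, vocab_head, ocr_key), shapes, and dimensions chosen for illustration only.

import torch
import torch.nn as nn

class PointerStep(nn.Module):
    """One decoding step of a dynamic pointer network (illustrative sketch).

    The answer space is the fixed vocabulary plus the N OCR tokens found
    in the image, so the model can copy scene text verbatim instead of
    being restricted to words seen during training.
    """

    def __init__(self, d_model: int, vocab_size: int):
        super().__init__()
        self.vocab_head = nn.Linear(d_model, vocab_size)  # fixed-vocabulary scores
        self.ocr_key = nn.Linear(d_model, d_model)        # projects OCR features to pointer keys

    def forward(self, dec_state: torch.Tensor, ocr_feats: torch.Tensor) -> torch.Tensor:
        # dec_state: (B, d_model) decoder state at the current answer step
        # ocr_feats: (B, N, d_model) fused features of the N OCR tokens
        vocab_logits = self.vocab_head(dec_state)                           # (B, V)
        keys = self.ocr_key(ocr_feats)                                      # (B, N, d)
        ptr_logits = torch.bmm(keys, dec_state.unsqueeze(-1)).squeeze(-1)   # (B, N)
        # Concatenated logits: an argmax below V selects a vocabulary word,
        # an argmax at V or above copies the corresponding OCR token.
        return torch.cat([vocab_logits, ptr_logits], dim=-1)                # (B, V + N)

At each iteration the selected token is fed back as input to the next decoding step, which is what makes the answer prediction iterative rather than a single classification.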

Files in This Item:
There are no files associated with this item.
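
The abstract also mentions training with a focal loss to overcome class imbalance in the answer space. Below is a minimal sketch of the standard binary focal loss (Lin et al., 2017) applied to the combined vocabulary-plus-OCR logits; the gamma and alpha defaults are common choices from that paper, not values taken from this article.

import torch
import torch.nn.functional as F

def focal_bce_loss(logits: torch.Tensor,
                   targets: torch.Tensor,
                   gamma: float = 2.0,
                   alpha: float = 0.25) -> torch.Tensor:
    """Binary focal loss over multi-label answer logits (illustrative sketch).

    Rare answers dominate TextVQA-style answer spaces; scaling the
    cross-entropy by (1 - p_t)^gamma down-weights well-classified
    examples so frequent, easy answers do not swamp the rare ones.
    """
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)              # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)  # class-balancing weight
    return (alpha_t * (1.0 - p_t) ** gamma * bce).mean()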


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.