
Please use this identifier to cite or link to this item: http://dspace.bits-pilani.ac.in:8080/jspui/handle/123456789/19201
Full metadata record
DC Field | Value | Language
dc.contributor.author | Kumar, Dhruv | -
dc.contributor.author | Chalapathi, G.S.S. | -
dc.date.accessioned | 2025-08-14T10:34:46Z | -
dc.date.available | 2025-08-14T10:34:46Z | -
dc.date.issued | 2025-05 | -
dc.identifier.uri | https://ieeexplore.ieee.org/abstract/document/11007557/authors#authors | -
dc.identifier.uri | http://dspace.bits-pilani.ac.in:8080/jspui/handle/123456789/19201 | -
dc.description.abstract | Transformers have emerged as a groundbreaking architecture in the field of computer vision, offering a compelling alternative to traditional convolutional neural networks (CNNs) by enabling the modeling of long-range dependencies and global context through self-attention mechanisms. Originally developed for natural language processing, transformers have now been successfully adapted for a wide range of vision tasks, leading to significant improvements in performance and generalization. This survey provides a comprehensive overview of the fundamental principles of transformer architectures, highlighting the core mechanisms such as self-attention, multi-head attention, and positional encoding that distinguish them from CNNs. We delve into the theoretical adaptations required to apply transformers to visual data, including image tokenization and the integration of positional embeddings. A detailed analysis of key transformer-based vision architectures such as ViT, DeiT, Swin Transformer, PVT, Twins, and CrossViT is presented, alongside their practical applications in image classification, object detection, video understanding, medical imaging, and cross-modal tasks. The paper further compares the performance of vision transformers with CNNs, examining their respective strengths, limitations, and the emergence of hybrid models. Finally, we discuss current challenges in deploying ViTs, such as computational cost, data efficiency, and interpretability, and explore recent advancements and future research directions, including efficient architectures, self-supervised learning, and multimodal integration. | en_US
dc.language.iso | en | en_US
dc.publisher | IEEE | en_US
dc.subject | Computer Science | en_US
dc.subject | EEE | en_US
dc.subject | Transformers | en_US
dc.subject | Computer architecture | en_US
dc.subject | Computer vision | en_US
dc.subject | Convolutional neural networks (CNNs) | en_US
dc.title | Transformers for vision: a survey on innovative methods for computer vision | en_US
dc.type | Article | en_US
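The abstract above highlights image tokenization (splitting an image into flattened patches) and self-attention as the core mechanisms distinguishing vision transformers from CNNs. As a rough illustration only (my own sketch in NumPy, not code from the paper; the random projection matrices stand in for learned weights), these two steps might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": 8x8 pixels, one channel; split into non-overlapping 4x4 patches
# and flatten each patch into a token vector (ViT-style tokenization).
image = rng.standard_normal((8, 8))
patch = 4
tokens = np.array([
    image[i:i + patch, j:j + patch].reshape(-1)
    for i in range(0, 8, patch)
    for j in range(0, 8, patch)
])  # shape: (4 tokens, 16 features)

d = tokens.shape[1]
# In a real model Q, K, V come from learned linear projections;
# random matrices are used here purely for illustration.
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv

# Scaled dot-product self-attention: softmax(Q K^T / sqrt(d)) V.
scores = Q @ K.T / np.sqrt(d)
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
out = weights @ V  # each token becomes a weighted mix of all tokens (global context)

print(tokens.shape, weights.shape, out.shape)
```

Every output token attends to every patch, which is the "global context" property the abstract contrasts with the local receptive fields of CNNs; multi-head attention repeats this with several independent projections in parallel.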
Appears in Collections: Department of Computer Science and Information Systems

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.