Please use this identifier to cite or link to this item: http://dspace.bits-pilani.ac.in:8080/jspui/handle/123456789/16262
Full metadata record
DC Field	Value	Language
dc.contributor.author	Gupta, Shashank	-
dc.date.accessioned	2024-10-28T10:09:18Z	-
dc.date.available	2024-10-28T10:09:18Z	-
dc.date.issued	2024-05	-
dc.identifier.uri	https://link.springer.com/article/10.1007/s11042-024-19409-z	-
dc.identifier.uri	http://dspace.bits-pilani.ac.in:8080/jspui/handle/123456789/16262	-
dc.description.abstract	Object detection (OD) in Advanced Driver Assistant Systems (ADAS) is a fundamental problem, especially when complex, unseen cross-domain shifts occur in the real driving scenarios of Autonomous Vehicles (AVs). During the sensory perception of AVs in the driving environment, Deep Neural Networks (DNNs) trained on existing large datasets fail to detect vehicular instances in real-world driving scenes with sophisticated dynamics. Recent advances in Generative Adversarial Networks (GANs) have been effective in generating different domain adaptations under various operational conditions of AVs; however, they lack key-object preservation during the image-to-image translation process. Moreover, high translation discrepancy has been observed with many existing GAN frameworks when they encounter large and complex domain shifts such as night, rain, and fog, resulting in an increased number of false positives during vehicle detection. Motivated by these challenges, we propose COPGAN, a cycle-object-preserving cross-domain GAN framework that generates diverse cross-domain mappings by translating the driving conditions of an AV to a desired target domain while preserving the key objects. We fine-tune COPGAN training with an initial step of key-feature selection to realize an instance-aware image translation model. It introduces a cycle-consistency loss to produce instance-specific translated images in various domains. Compared with baseline models that need pixel-level identification to preserve object features, COPGAN requires only instance-level annotations, which are easier to acquire. We test the robustness of the object detectors SSD, Detectron, and YOLOv5 (SDY) against the synthetically generated COPGAN images, along with AdaIN images, stylized renderings, and augmented images. The robustness of COPGAN is measured as mean performance degradation on the distorted test set (at IoU threshold = 50) and relative performance degradation under corruption (rPD). Our empirical outcomes demonstrate the strong generalization capability of the object detectors under the introduced augmentations, weather translations, and AdaIN mix. The experiments and findings at various phases point to the applicability and scalability of domain-adaptive DNNs in ADAS for attaining a safe environment without human intervention.	en_US
dc.language.iso	en	en_US
dc.publisher	Springer	en_US
dc.subject	Computer Science	en_US
dc.subject	Object detection (OD)	en_US
dc.subject	Advanced Driver Assistant Systems (ADAS)	en_US
dc.subject	Deep Neural Networks (DNNs)	en_US
dc.title	Employing cross-domain modelings for robust object detection in dynamic environment of autonomous vehicles	en_US
dc.type	Article	en_US
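
Note on the quantities named in the abstract (a minimal sketch, not reproduced from the article itself): the cycle-consistency loss below is the standard CycleGAN form, and the mPD/rPD definitions follow the usual robustness-benchmarking convention; the symbols G, F, P_clean, and P_c are illustrative assumptions, since this record does not give the paper's exact formulas.

% Assumed CycleGAN-style cycle-consistency loss; G maps source -> target,
% F maps target -> source. COPGAN's instance-aware variant may differ.
\mathcal{L}_{cyc}(G, F) =
    \mathbb{E}_{x \sim p_{data}(x)}\left[ \lVert F(G(x)) - x \rVert_1 \right]
  + \mathbb{E}_{y \sim p_{data}(y)}\left[ \lVert G(F(y)) - y \rVert_1 \right]

% Assumed robustness metrics: P_clean is the detector's mAP@50 on clean
% images, P_c its mAP@50 under corruption/translation c, over a set C.
\mathrm{mPD} = P_{clean} - \frac{1}{|C|} \sum_{c \in C} P_c,
\qquad
\mathrm{rPD} = \frac{\mathrm{mPD}}{P_{clean}}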
Appears in Collections: Department of Computer Science and Information Systems

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.