DSpace Repository

Employing cross-domain modelings for robust object detection in dynamic environment of autonomous vehicles

Show simple item record

dc.contributor.author Gupta, Shashank
dc.date.accessioned 2024-10-28T10:09:18Z
dc.date.available 2024-10-28T10:09:18Z
dc.date.issued 2024-05
dc.identifier.uri https://link.springer.com/article/10.1007/s11042-024-19409-z
dc.identifier.uri http://dspace.bits-pilani.ac.in:8080/jspui/handle/123456789/16262
dc.description.abstract Object detection (OD) in Advanced Driver Assistant Systems (ADAS) is a fundamental problem, especially when complex, unseen cross-domain shifts occur in the real driving scenarios of autonomous vehicles (AVs). During the sensory perception of an AV in its driving environment, Deep Neural Networks (DNNs) trained on existing large datasets fail to detect vehicular instances in real-world driving scenes with sophisticated dynamics. Recent advances in Generative Adversarial Networks (GANs) have been effective in generating different domain adaptations under the various operational conditions of AVs; however, they lack key-object preservation during the image-to-image translation process. Moreover, many existing GAN frameworks exhibit high translation discrepancy when confronted with large and complex domain shifts such as night, rain, and fog, resulting in an increased number of false positives during vehicle detection. Motivated by these challenges, we propose COPGAN, a cycle-object-preserving cross-domain GAN framework that generates diverse cross-domain mappings by translating the driving conditions of an AV to a desired target domain while preserving the key objects. We fine-tune COPGAN training with an initial key-feature selection step to realize an instance-aware image translation model, and introduce a cycle-consistency loss to produce instance-specific translated images in various domains. Compared with baseline models that need pixel-level identification to preserve object features, COPGAN requires only instance-level annotations, which are easier to acquire. We test the robustness of the object detectors SSD, Detectron, and YOLOv5 (SDY) against the synthetically generated COPGAN images, along with AdaIN images, stylized renderings, and augmented images.
The robustness of COPGAN is measured as the mean performance degradation on the distorted test set (at an IoU threshold of 50) and the relative performance degradation under corruption (rPD). Our empirical results demonstrate a strong generalization capability of the object detectors under the introduced augmentations, weather translations, and AdaIN mix. The experiments and findings at various phases point to the applicability and scalability of domain-adaptive DNNs in ADAS for attaining a safe driving environment without human intervention. en_US
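The cycle-consistency loss mentioned in the abstract can be illustrated with a minimal sketch. The generators `G` and `F` below are hypothetical toy stand-ins (simple linear maps, not COPGAN's networks), chosen only to show the loss structure: an image translated to the target domain and back should match the original, which is what encourages content (and hence key-object) preservation.

```python
# Toy "generators": G maps source -> target domain, F maps target -> source.
# These are illustrative linear maps (assumption), not the paper's networks.
def G(x):
    # e.g., a stand-in for a day -> night translation
    return [0.5 * v + 1.0 for v in x]

def F(y):
    # e.g., a stand-in for a night -> day translation (inverse of G here)
    return [2.0 * (v - 1.0) for v in y]

def cycle_consistency_loss(x, lam=10.0):
    """L1 cycle loss: penalizes the difference between an input and its
    round-trip translation F(G(x)). lam is the usual weighting factor."""
    x_rec = F(G(x))
    return lam * sum(abs(a - b) for a, b in zip(x, x_rec)) / len(x)

x = [0.2, 0.7, 0.9]
print(cycle_consistency_loss(x))  # near zero, since F inverts G exactly here
```

With real, imperfect generators the loss is nonzero and is minimized during training alongside the adversarial losses, pulling the translation toward content-preserving mappings.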
dc.language.iso en en_US
dc.publisher Springer en_US
dc.subject Computer Science en_US
dc.subject Object detection (OD) en_US
dc.subject Advanced Driver Assistant Systems (ADAS) en_US
dc.subject Deep Neural Networks (DNNs) en_US
dc.title Employing cross-domain modelings for robust object detection in dynamic environment of autonomous vehicles en_US
dc.type Article en_US


Files in this item


There are no files associated with this item.

