
Please use this identifier to cite or link to this item: http://dspace.bits-pilani.ac.in:8080/jspui/handle/123456789/8336
Full metadata record
DC Field | Value | Language
dc.contributor.author | Narang, Pratik | -
dc.date.accessioned | 2023-01-06T06:53:22Z | -
dc.date.available | 2023-01-06T06:53:22Z | -
dc.date.issued | 2020-04 | -
dc.identifier.uri | https://www.sciencedirect.com/science/article/pii/S0140366419318602 | -
dc.identifier.uri | http://dspace.bits-pilani.ac.in:8080/xmlui/handle/123456789/8336 | -
dc.description.abstract | Due to the increasing capability of drones and the need to monitor remote areas, drone surveillance is becoming popular. In the case of a natural disaster, drones can scan the wide affected area quickly, making search and rescue (SAR) faster and saving more human lives. However, the use of autonomous drones for search and rescue remains largely unexplored and requires attention from researchers to develop efficient algorithms for autonomous drone surveillance. To develop an automated application using recent advances in deep learning, a dataset is key: a substantial amount of human detection and action detection data is required to train the deep-learning models. As no drone-surveillance dataset for SAR is available in the literature, this paper proposes an image dataset for human action detection in SAR. The proposed dataset contains 2,000 unique images filtered from 75,000 images and includes 30,000 human instances performing different actions. In addition, various experiments are conducted with the proposed dataset, a publicly available dataset, and state-of-the-art detection methods. Our experiments show that existing models are not adequate for critical applications such as SAR, which motivates us to propose a model inspired by the pyramidal feature extraction of SSD for human detection and action recognition. The proposed model achieves 0.98 mAP on the proposed dataset, which is a significant contribution. In addition, it achieves a 7% higher mAP on the standard Okutama dataset in comparison with the state-of-the-art detection models in the literature. | en_US
dc.language.iso | en | en_US
dc.publisher | Elsevier | en_US
dc.subject | Computer Science | en_US
dc.subject | Drone surveillance | en_US
dc.subject | Convolutional neural network (CNN) | en_US
dc.subject | Object detection (OD) | en_US
dc.subject | Action recognition | en_US
dc.subject | Aerial action dataset | en_US
dc.title | Drone-surveillance for search and rescue in natural disaster | en_US
dc.type | Article | en_US
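
Note: the abstract above describes a detector "inspired by the pyramidal feature extraction of SSD" for human detection and action recognition, but this record carries no implementation details. The code below is only a minimal sketch of that SSD-style idea, assuming a PyTorch setting; the pyramid channel widths, number of scales, anchor count, and number of action classes are illustrative assumptions, not the authors' architecture. It shows small classification and box-regression convolutions attached to feature maps at several resolutions, with their per-anchor predictions pooled together.

# Minimal sketch (not the authors' code): an SSD-style detection head that
# predicts action-class scores and box offsets from feature maps at several
# pyramid scales. Backbone, scale count, anchor count, and class count are
# assumptions made for illustration only.
import torch
import torch.nn as nn


class PyramidalSSDHead(nn.Module):
    """Per-anchor class scores and box offsets predicted at multiple scales."""

    def __init__(self, in_channels=(256, 512, 1024), num_anchors=4, num_classes=5):
        super().__init__()
        # As in SSD, one 3x3 classification conv and one 3x3 regression conv
        # are applied directly to each pyramid feature map.
        self.cls_heads = nn.ModuleList(
            nn.Conv2d(c, num_anchors * num_classes, kernel_size=3, padding=1)
            for c in in_channels
        )
        self.reg_heads = nn.ModuleList(
            nn.Conv2d(c, num_anchors * 4, kernel_size=3, padding=1)
            for c in in_channels
        )
        self.num_classes = num_classes

    def forward(self, feature_maps):
        cls_outputs, reg_outputs = [], []
        for feat, cls_head, reg_head in zip(feature_maps, self.cls_heads, self.reg_heads):
            n = feat.shape[0]
            # (N, A*C, H, W) -> (N, H*W*A, C): one class-score vector per anchor
            cls = cls_head(feat).permute(0, 2, 3, 1).reshape(n, -1, self.num_classes)
            # (N, A*4, H, W) -> (N, H*W*A, 4): one box offset per anchor
            reg = reg_head(feat).permute(0, 2, 3, 1).reshape(n, -1, 4)
            cls_outputs.append(cls)
            reg_outputs.append(reg)
        # Pool predictions from all pyramid levels into single tensors.
        return torch.cat(cls_outputs, dim=1), torch.cat(reg_outputs, dim=1)


if __name__ == "__main__":
    # Dummy multi-scale feature maps such as a backbone might produce for an
    # aerial frame (shapes are illustrative only).
    feats = [torch.randn(1, 256, 64, 64),
             torch.randn(1, 512, 32, 32),
             torch.randn(1, 1024, 16, 16)]
    scores, boxes = PyramidalSSDHead()(feats)
    print(scores.shape, boxes.shape)  # per-anchor class scores and box offsets

Predicting from several resolutions is what makes this style of head attractive for aerial SAR imagery: small, distant people can be resolved on the finer feature maps, while larger or closer instances are handled by the coarser ones.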
Appears in Collections: Department of Computer Science and Information Systems

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.