
Vision-driven robotic grasping order generation using segmentation and relative positioning in a cluttered environment


dc.contributor.author Sangwan, Kuldip Singh
dc.date.accessioned 2025-11-04T09:05:02Z
dc.date.available 2025-11-04T09:05:02Z
dc.date.issued 2025
dc.identifier.uri https://www.sciencedirect.com/science/article/pii/S1877050925016229
dc.identifier.uri http://dspace.bits-pilani.ac.in:8080/jspui/handle/123456789/19955
dc.description.abstract In this paper, a multichannel vision-based approach for intelligent robotic grasping in cluttered environments is proposed. The experiments are conducted with an open-source synthetic dataset consisting of color and depth images to address this general problem. The proposed approach uses a modified Cascade Mask R-CNN-based semantic segmentation model to detect and classify objects in the scene. The results show a high mAP@0.5-0.95 score of 93.85% for the customized Meta-Grasp dataset using this model. The captured depth data is processed over the segmented mask regions to approximate each object's position in a 3D coordinate system. The affinity between the edge profiles of the segmented objects is calculated to estimate their spatial relations in 3D space. This information is used to generate a priority order for object pickup such that objects in the top layer are picked first, followed by those in the underlying layers. The methodology was evaluated for various placement options on a 6-class subset of the dataset with a varying number of objects. The actual object classes and their mask positions were obtained successfully, and the priority order was calculated such that no lower-layered object was picked before an object lying above it. Overall, the proposed two-stage decision pipeline has demonstrated its effectiveness in generating the pickup priority and sorting order for a multi-object scene and has potential applications in fully automated factories or smart manufacturing. en_US
dc.language.iso en en_US
dc.publisher Elsevier en_US
dc.subject Mechanical engineering en_US
dc.subject Convolutional neural networks (CNNs) en_US
dc.subject Mask R-CNN en_US
dc.subject Robotic arm en_US
dc.subject Smart Manufacturing en_US
dc.title Vision-driven robotic grasping order generation using segmentation and relative positioning in a cluttered environment en_US
dc.type Article en_US
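
As described in the abstract, the pipeline segments objects, estimates each object's 3D position from the depth channel, relates neighbouring objects through their edge profiles, and orders pickup so that top-layer objects are removed before the ones beneath them. The paper's implementation is not attached to this record; the sketch below is a minimal Python illustration, under assumed inputs (per-object boolean masks and an aligned metric depth map), of how such a layer-based pickup order could be derived. The function name pickup_order, the one-pixel-dilation contact test, and the mean-depth comparison are hypothetical stand-ins, not the authors' edge-profile affinity measure.

import numpy as np

def pickup_order(masks, depth):
    """Order object indices so that objects on top are picked first.

    masks : list of boolean arrays of shape (H, W), one per segmented object
    depth : float array of shape (H, W); smaller values are closer to the camera
    """
    n = len(masks)
    # above[i][j] is True when mask i touches mask j and is closer to the
    # camera along that contact band, i.e. object i is assumed to rest on j.
    above = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # One-pixel dilation of mask i (4-neighbourhood) to find the band
            # where it meets or overlaps mask j.
            pad = np.pad(masks[i], 1)
            dilated = (pad[:-2, 1:-1] | pad[2:, 1:-1] |
                       pad[1:-1, :-2] | pad[1:-1, 2:] | masks[i])
            contact = dilated & masks[j]
            if contact.any() and depth[masks[i]].mean() < depth[contact].mean():
                above[i][j] = True

    # Peel off layers: repeatedly take the objects with nothing on top of them.
    remaining = set(range(n))
    order = []
    while remaining:
        layer = [j for j in remaining
                 if not any(above[i][j] for i in remaining if i != j)]
        if not layer:                 # noisy depth can create cycles; break one
            layer = [min(remaining)]
        layer.sort(key=lambda j: float(depth[masks[j]].mean()))
        order.extend(layer)
        remaining -= set(layer)
    return order

# Toy example: a small box resting on a larger one; the closer box is picked first.
depth = np.full((6, 6), 2.0)
top_box = np.zeros((6, 6), dtype=bool); top_box[1:3, 1:4] = True
bottom_box = np.zeros((6, 6), dtype=bool); bottom_box[2:5, 1:5] = True
depth[top_box] = 1.0
print(pickup_order([top_box, bottom_box], depth))   # -> [0, 1]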


Files in this item

There are no files associated with this item.

