dc.description.abstract |
Large indoor spaces with complex layouts are often difficult to navigate. Indoor spaces in hospitals, universities, shopping complexes, etc., convey multi-modal information through text and symbols, making it difficult for Blind and Visually Impaired (BVI) people to navigate such spaces independently. Indoor environments are usually GPS-denied; therefore, Bluetooth-based, WiFi-based, or range-based methods are used for localization. These methods incur high setup costs, lack good accuracy, and sometimes require specialized sensing equipment. We propose a Visual Assist (VA) system for the indoor navigation of BVI individuals that uses visual fiducial markers for localization. State-of-the-art (SOTA) approaches to localization with visual fiducial markers rely on fixed cameras with a limited field of view. We instead employ a camera mounted on a pan-tilt turret, which provides a 360° field of view for enhanced marker tracking and therefore requires fewer markers for mapping and navigation. We further use our localization model to enhance existing SLAM methods, namely Hector SLAM, ORB-SLAM, and UcoSLAM. The efficacy of the proposed system is measured on three metrics: Root Mean Square Error (RMSE), Average Distance to Nearest Neighbours (ADNN), and Absolute Trajectory Error (ATE). The proposed system offers accurate trajectory tracking up to ±8 cm. ADNN and RMSE of Hector SLAM, ORB-SLAM, and UcoSLAM improve by 9.1%, 8.9%, and 7%, respectively, while ATE is reduced by 6.7%, 4.5%, and 5.2%. |
en_US |