Abstract:
This work investigates existing assistive solutions for the visually impaired, focusing on economically viable options for communities in developing regions. It develops a real-time obstacle-tracking model based on the YOLO (You Only Look Once) algorithm, combined with text-to-speech synthesis that delivers auditory cues. The approach improves on current assistive technology, although limitations remain in algorithmic precision and in the integration of user feedback. The research lays the groundwork for refining this technology and for its seamless integration into the daily lives of visually impaired users. The resulting system performs best at distances below 1.5 meters, with a distance-estimation error of less than 10%, corresponding to a margin of 10–15 cm for objects located one meter away. It thereby offers individuals with visual impairments greater independence and confidence in navigating and interacting with their surroundings.
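A minimal sketch of the detection-plus-auditory-cue pipeline described above is shown below. It assumes the Ultralytics YOLO implementation, the pyttsx3 text-to-speech engine, a webcam input, and a simple pinhole-camera distance estimate; the model version, camera focal length, assumed obstacle width, and 1.5 m alert threshold are illustrative choices, not the paper's exact configuration.

```python
import cv2
import pyttsx3
from ultralytics import YOLO

# Assumed components: YOLOv8 nano weights and pyttsx3 for offline TTS.
model = YOLO("yolov8n.pt")
tts = pyttsx3.init()

# Illustrative calibration constants (not from the paper).
KNOWN_WIDTH_M = 0.5      # assumed average obstacle width in meters
FOCAL_LENGTH_PX = 700.0  # assumed camera focal length in pixels
ALERT_RANGE_M = 1.5      # alert range matching the abstract's reported working distance

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break

    # Run object detection on the current frame.
    results = model(frame, verbose=False)[0]
    for box in results.boxes:
        label = model.names[int(box.cls)]
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        width_px = max(x2 - x1, 1.0)

        # Rough pinhole-camera distance estimate from apparent width.
        distance_m = KNOWN_WIDTH_M * FOCAL_LENGTH_PX / width_px

        # Announce nearby obstacles through synthesized speech.
        if distance_m < ALERT_RANGE_M:
            tts.say(f"{label} ahead, about {distance_m:.1f} meters")
            tts.runAndWait()

cap.release()
```

In practice, the announcement step would be rate-limited or run on a separate thread so speech output does not stall the detection loop.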