OpenVNAVI: A Vibrotactile Navigation Aid for the Visually Impaired


Bachelor Thesis

Author: David Antón Sánchez
Supervisor: René Bohne

Downloads: Thesis (PDF) · GitHub
Abstract:
According to the World Health Organization, 285 million people worldwide are estimated to have some degree of visual impairment, and 39 million of them suffer from complete vision loss.

Technological development in many fields has brought, and continues to bring, comforts and life-changing improvements to our lives: smartphones, GPS, self-driving cars, and so on. However, blind and visually impaired (BVI) people still rely on century-old methods for navigating the world on their own:

The white cane, a simple and affordable tool used by the BVI community as an obstacle detection device through direct contact with obstacles; and the guide dog, used as an obstacle avoidance and navigation aid, yet unaffordable for the vast majority of the BVI community.

The aim of this Bachelor Thesis is to create a low-cost system based on vibrotactile feedback that improves upon the functionality of the guide dog as an obstacle avoidance and navigation aid, allowing more BVI users to navigate the world easily and safely.

First, related work from the past decades is explored, analyzing the state of the art and the current areas of research. Then, the author analyzes the requirements and describes the implementation of all the features and components of the system. After that, the implementation of the system is evaluated. Finally, possible directions for future development are presented.

System Description:
OpenVNAVI is a vest equipped with a depth sensor and an array of vibration motor units that allows people with visual impairment to avoid obstacles in their environment.

The ASUS Xtion PRO LIVE depth sensor, positioned on the user's chest, scans the environment as the user moves. The Raspberry Pi 2 captures a frame from the depth sensor's video feed, downsamples it from 640×480 to 16×8 pixels, and maps each pixel to one vibration motor unit in an array positioned on the user's belly.
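
The page does not show the capture code itself; the following is a minimal sketch of this step, assuming the OpenNI2 runtime with the primesense Python bindings and OpenCV. It is an illustration of the approach, not the project's own code:

{CODE(colors="python")}
# Minimal sketch (not the thesis code): grab one depth frame and
# downsample it to the 16x8 resolution of the motor array.
# Assumes the OpenNI2 runtime plus the `primesense` and
# `opencv-python` Python packages.
import numpy as np
import cv2
from primesense import openni2

openni2.initialize()                    # load the OpenNI2 runtime
device = openni2.Device.open_any()      # e.g. the ASUS Xtion PRO LIVE
depth_stream = device.create_depth_stream()
depth_stream.start()

frame = depth_stream.read_frame()
buf = frame.get_buffer_as_uint16()      # raw depth values in millimeters
depth = np.frombuffer(buf, dtype=np.uint16).reshape(480, 640)

# INTER_AREA averages each source block, which suits downsampling:
# the result is one depth value per vibration motor unit.
small = cv2.resize(depth, (16, 8), interpolation=cv2.INTER_AREA)

depth_stream.stop()
openni2.unload()
{CODE}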

The grayscale value of each pixel in the low-resolution frame is mapped to a PWM duty cycle, which the Raspberry Pi 2 outputs through PWM driver boards to drive the corresponding vibration motor. The vibration amplitude of each motor is therefore a function of the proximity of an object in that part of the scene.
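
As an illustration of this mapping, the sketch below assumes PCA9685 16-channel PWM driver boards controlled with the Adafruit_PCA9685 Python library; the board count, I2C addresses, PWM frequency, and the near-means-strong scaling are assumptions, not details taken from this page:

{CODE(colors="python")}
# Minimal sketch: map each downsampled depth pixel to a motor's PWM
# duty cycle. All hardware details below are assumptions: eight
# PCA9685 boards (8 x 16 channels = 128 motors) at consecutive
# I2C addresses, driven via the Adafruit_PCA9685 library.
import numpy as np
import Adafruit_PCA9685

NUM_BOARDS = 8
boards = [Adafruit_PCA9685.PCA9685(address=0x40 + i) for i in range(NUM_BOARDS)]
for board in boards:
    board.set_pwm_freq(490)             # assumed frequency for ERM motors

def update_motors(small):
    """small: 8x16 uint16 depth frame; lower values mean closer objects."""
    # Invert so near obstacles vibrate strongly, then scale into the
    # PCA9685's 12-bit duty-cycle range (0..4095).
    norm = 1.0 - small.astype(np.float32) / max(int(small.max()), 1)
    duty = (norm * 4095).astype(np.uint16)
    for idx, value in enumerate(duty.flatten()):
        boards[idx // 16].set_pwm(idx % 16, 0, int(value))
{CODE}

A main loop would then simply repeat the capture-and-downsample step above and call update_motors() on every frame.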

With this method, the vibration motor unit array renders a vibratory image on the user's belly, helping to create a mental representation of the obstacles in the scene.

{IMG(src="/files/migrated/images/overview.PNG",height="800",width="600")}{IMG}
{IMG(src="/files/migrated/images/array1.JPG",height="600",width="800")}{IMG}

{IMG(src="/files/migrated/images/vest_front.PNG",height="800",width="600")}{IMG}

{IMG(src="/files/migrated/images/specimen.JPG",height="800",width="600")}{IMG}


