Most prosthetic approaches to blindness focus on conveying raw visual images to the subject. Examples include electrode arrays implanted in the retina and auditory or haptic sensory substitution devices. The hope is that, despite unavoidable corruptions of the image, the brain will still be able to extract the precious bits of actionable information needed for "seeing" and acting. As an alternative, we are building a prosthesis in which a wearable computer compiles all the actionable information, and only those few bits are conveyed to the subject. The computer narrates the scene to the user in natural language: objects and their locations, faces and their identities, options for navigation. Our immediate goal is to assist blind people in their social interactions and in navigating unfamiliar buildings indoors. Beyond that, we hope that the narrative creates a rich mental image of the surroundings and thereby conveys a sense of seeing.
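To make the narration idea concrete, here is a minimal sketch of how a scene could be reduced to a few spoken callouts. All names here (SceneItem, clock_direction, narrate) and the clock-face phrasing are illustrative assumptions, not the actual CARA implementation.

```python
# Hypothetical sketch: reduce a detected scene to a few actionable callouts.
import math
from dataclasses import dataclass

@dataclass
class SceneItem:
    label: str   # e.g. "door", "chair", or a recognized person's name
    x: float     # meters to the user's right (negative = left)
    z: float     # meters ahead of the user

def clock_direction(item: SceneItem) -> int:
    """Map a relative position to a clock hour (12 = straight ahead)."""
    angle = math.degrees(math.atan2(item.x, item.z))  # -180..180, 0 = ahead
    hour = round(angle / 30) % 12
    return 12 if hour == 0 else hour

def narrate(items: list[SceneItem], max_items: int = 3) -> list[str]:
    """Keep only the nearest few items and phrase each as a short callout."""
    nearest = sorted(items, key=lambda it: math.hypot(it.x, it.z))[:max_items]
    return [
        f"{it.label}, {clock_direction(it)} o'clock, "
        f"{math.hypot(it.x, it.z):.0f} meters"
        for it in nearest
    ]

print(narrate([SceneItem("door", -2.0, 4.0), SceneItem("Anna", 0.3, 2.0)]))
# ["Anna, 12 o'clock, 2 meters", "door, 11 o'clock, 4 meters"]
```

Capping the output at a few nearest items reflects the core design choice: convey only the few actionable bits, rather than the whole image.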
Our current system is called CARA (Cognitive Augmented Reality Assistant). In one mode, CARA implements a "virtual guide": a virtual object that moves through real space along a planned navigation route, staying two steps ahead of the user and issuing repeated calls of "follow me" (a sketch of this behavior follows below). This video shows a congenitally blind subject using our headset to navigate an unfamiliar corridor on the very first attempt, at a walking speed typical of sighted people.
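The following is a hedged sketch of the virtual-guide behavior described above: a marker that advances along the route, always staying a fixed lead distance ahead of the user. The names and parameters (LEAD_M, guide_position, the nearest-waypoint approximation) are assumptions for illustration, not the published implementation.

```python
# Hypothetical sketch: keep a virtual guide a fixed lead ahead on the route.
import math

LEAD_M = 1.5  # roughly "two steps" ahead of the user (assumed value)

def _dist(a, b):
    return math.hypot(b[0] - a[0], b[1] - a[1])

def guide_position(user, route):
    """Place the guide LEAD_M of path length past the user's nearest waypoint.

    user  -- (x, z) position of the user, meters
    route -- list of (x, z) waypoints defining the navigation path
    (Nearest-waypoint matching is a simplification; a real system would
    project the user onto the path.)
    """
    i = min(range(len(route)), key=lambda k: _dist(user, route[k]))
    remaining = LEAD_M
    pos = route[i]
    for nxt in route[i + 1:]:
        seg = _dist(pos, nxt)
        if seg >= remaining:
            t = remaining / seg  # interpolate within this segment
            return (pos[0] + t * (nxt[0] - pos[0]),
                    pos[1] + t * (nxt[1] - pos[1]))
        remaining -= seg
        pos = nxt
    return route[-1]  # near the destination, the guide waits at the end

# Each frame, the headset would re-place the guide and repeat its call:
route = [(0, 0), (0, 5), (3, 5)]
print(guide_position((0, 1.0), route))  # -> (0.0, 1.5), with "follow me"
```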
• Liu Y, Stiles NR, Meister M (2018). Augmented reality powers a cognitive assistant for the blind. eLife 7:e37841. doi: 10.7554/eLife.37841