Augmenting the sense of sight through spatial audio
Tristan Bangma is a Paralympic tandem cycling gold medallist. At a young age, he was diagnosed with optic atrophy, a condition that interferes with the optic nerve's ability to transmit impulses from the eye to the brain. As a result, his vision is similar to seeing the world through a very dense fog, which means he depends on a pilot he can trust completely to guide him.
Our brief was to design and develop a solution that would allow Tristan to navigate a velodrome independently by augmenting his sense of sight through spatial sound.
Our approach
Research shows that many blind and visually impaired people become particularly adept at determining the source and distance of sounds, and Tristan is no exception. Knowing this, we focused on developing a device that could translate the visual world into sound.
The track and the other racers are mapped by two LiDAR sensors on the front and back of the bike. Like those used in self-driving cars, the sensors work on a similar principle to radar, using laser pulses to map the environment as a cloud of more than 40,000 sampled points per second. This enormous amount of data is sent to the cloud via a microprocessor and a mobile phone in a customised casing on Bangma's bike. The data is then rendered into 8D audio.
Finally, the sound is streamed to Bangma's earphones, enabling him to gauge his surroundings from the intensity and direction of the sounds he hears. The whole process takes only a fraction of a second, and is possible only because of the speed and stability of the 5G network.
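The core idea, turning a point cloud into sound whose loudness and direction encode proximity, can be illustrated with a minimal sketch. This is a simplified stereo-panning model for illustration only: the function name, the 2-D point format, and the maximum range are all assumptions, and the real system renders far richer spatial (8D) audio than a two-channel pan.

```python
import math

def render_spatial_audio(points, max_range=30.0):
    """Illustrative sketch: map a 2-D LiDAR point cloud (x forward,
    y left, in metres, bike at origin) to left/right ear gains.
    The nearest obstacle dominates: closer means louder, and its
    bearing steers a constant-power stereo pan."""
    if not points:
        return 0.0, 0.0  # nothing detected: silence
    # Find the nearest obstacle to the bike.
    x, y = min(points, key=lambda p: math.hypot(p[0], p[1]))
    dist = math.hypot(x, y)
    # Loudness falls off linearly: 1.0 at 0 m, 0.0 at max_range.
    loudness = max(0.0, 1.0 - dist / max_range)
    # Bearing of the obstacle: 0 = dead ahead, +pi/2 = directly left.
    azimuth = math.atan2(y, x)
    # Map bearing to a pan position in [0, 1]: 0 = right, 1 = left.
    pan = (azimuth / math.pi + 1.0) / 2.0
    # Constant-power panning keeps perceived loudness steady.
    left = loudness * math.sin(pan * math.pi / 2)
    right = loudness * math.cos(pan * math.pi / 2)
    return left, right
```

An obstacle straight ahead yields equal gains in both ears, while one off to the left raises the left-ear gain, which is the basic cue a rider can steer by.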