User:Bkpsusmitaa/sandbox4

Back-up copy of my hard work

vOICe
The vOICe Auditory Display Technology is one of several approaches to sensory substitution (vision substitution) for the blind. It aims to provide synthetic vision by means of a non-invasive visual prosthesis, allowing blind individuals to perceive their surroundings, analogously to seeing, through sound.

The vOICe Auditory Display maps live camera images into sounds. Over time, the brains of blind users learn to decode these generally very complex sounds as meaningful vision. Ideally, this would not only enable the experienced blind user to understand the visual content, but also to perceive it as truly visual, making it "feel" like vision.

It converts the image from a head- or belt-mounted camera into a "soundscape": a pattern of scores of different tones at different volumes and pitches, emitted simultaneously.

The vOICe Auditory Display has a picture resolution of up to a few thousand pixels (e.g., 60 × 60 pixels). It requires no surgery, with its attendant risks: its interface to the mind is simply sound delivered through headphones. The vOICe approach uses commercial off-the-shelf (COTS) equipment that one can purchase in most PC shops — a PC camera, a subnotebook PC or smartphone, and stereo earphones — which keeps the cost down compared with more dedicated medical equipment. The vOICe builds upon nature's own nanotechnology, human hearing and the human brain, with none of the health or safety concerns about possible risks associated with human-made nanoparticles and nanotechnology.

The vOICe converts live views from a video camera into soundscapes, patterns of scores of different tones at different volumes and pitches emitted simultaneously. The system uses a general video-to-audio mapping that associates height with pitch and brightness with loudness in a left-to-right scan of each video frame. Views are typically refreshed about once per second, with a typical image resolution of up to 60 × 60 pixels, as can be verified by spectrographic analysis.
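The mapping just described (height to pitch, brightness to loudness, left-to-right column scan) can be sketched in a few lines of code. This is a minimal illustration, not the actual vOICe implementation: the frequency range, sample rate and scan duration used below are assumptions chosen for demonstration, and the real software uses its own calibrated parameters.

```python
import numpy as np

def image_to_soundscape(image, duration=1.0, sample_rate=22050,
                        f_low=500.0, f_high=5000.0):
    """Sketch of a vOICe-style mapping: scan a grayscale image
    (rows x cols, brightness in 0..1) left to right over `duration`
    seconds. Each row drives a sine oscillator whose pitch rises
    with height in the image and whose loudness follows brightness.
    Frequency bounds and sample rate are illustrative assumptions."""
    rows, cols = image.shape
    samples_per_col = int(duration * sample_rate / cols)
    # Top rows get higher frequencies, spaced exponentially.
    freqs = f_low * (f_high / f_low) ** (
        np.arange(rows)[::-1] / max(rows - 1, 1))
    t = np.arange(samples_per_col) / sample_rate
    out = []
    for c in range(cols):  # left-to-right scan
        col_audio = np.zeros(samples_per_col)
        for r in range(rows):
            # brightness -> amplitude, height -> pitch
            col_audio += image[r, c] * np.sin(2 * np.pi * freqs[r] * t)
        col_audio /= rows  # keep the summed tones within [-1, 1]
        out.append(col_audio)
    return np.concatenate(out)
```

A bright dot near the top-left of the image thus produces a short, high-pitched tone at the start of the one-second soundscape; a bright dot at the bottom-right produces a low tone near the end.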

The device relies on the brain's adaptability: owing to neuroplasticity, the user learns to perceive imagery in the sound patterns and reinforces that learning through repeated experience, using the principles of artificial synesthesia.

Over time, with practice, the processing gradually moves down to subconscious levels and becomes automatic.

The technology has been positively and widely reviewed in the media, including newspapers, news websites, other websites and science magazines.

An Android app is also available for the technology.

Neuroscience and psychology research indicates recruitment of relevant brain areas in seeing with sound, as well as functional improvement through training.

The ultimate goal is to provide synthetic vision with truly visual sensations by exploiting the neural plasticity of the human brain.

Neuroscience research has shown that the visual cortex of even adult blind people can become responsive to sound, and "seeing with sound" might reinforce this in a visual sense with live video from a head-mounted camera encoded in sound. The extent to which cortical plasticity indeed allows for functionally relevant rewiring or remapping of the human brain is still largely unknown and is being investigated in an open collaboration with research partners around the world.

One suggestion for increasing the relative efficiency of the resulting visual stimuli is to adjust the visual field by using an accelerometer to provide a steady image even if the head is moved, which is implemented in its Android edition. Connecting an infrared sensor to adjust the camera position to match eye movements is an option for the Windows edition (though affordable mobile eye-trackers are not yet on the market).

The vOICe technology was invented in the 1990s by Peter Meijer. It has been positively and widely reviewed in the media, and also in various high-quality peer-reviewed scientific journals, far more extensively than the partial references in the current section; a fuller list is maintained on the inventor's website.

Note
For articles and reports published in peer-reviewed scientific journals, each individually referenced article needs to be downloaded and searched for the string "vOICe" with "Match Case" enabled.