White paper: Why The vOICe vision BCI for the blind can and will likely defeat Neuralink Blindsight and other brain implants for restoring vision
Neuralink Blindsight and most other brain implants aimed at restoring vision to the blind work by implanting electrodes in (or on) the primary visual cortex (V1). A small current through an electrode can give a visual percept in the form of a phosphene, a flash of light. The assumption is that by stimulating a grid of many electrodes one can evoke the perception of an image, and thereby restore a form of vision. The vOICe visual-to-auditory sensory substitution approach instead uses normal hearing to convey images encoded in sound, and the human brain must first learn to decode these sounds to obtain a form of (functional) vision through sound-guided mental imagery. In the following, we will focus mostly on comparing The vOICe with Neuralink Blindsight, but much of what is said about the latter also applies to other brain implants in the visual cortex. The vOICe approach too is not without limitations and unknowns, but overall has many key advantages. To scientists: we greatly welcome your feedback, insights and particularly studies!
| | Neuralink Blindsight | The vOICe vision BCI |
|---|---|---|
| Invasive or non-invasive? | Invasive: requires brain surgery with risks of infection, hemorrhage, seizures and death, as well as inevitably some damage to the cortical tissue where electrodes are inserted, both physical damage from the act of insertion and damage from possible subsequent inflammatory reactions and gliosis. Biocompatibility remains a major challenge. Invasive BCIs can be characterized by a dimensionless "Butcher number" (Meister, 2019): the ratio of the number of damaged or killed neurons to the number of stimulated or recorded neurons. Optogenetic approaches that avoid inserting electrodes introduce other risks, for instance by using viruses (viral vectors) to deliver genes into cells, potentially increasing cancer risk through mutations, while those genes themselves may trigger immune responses or prove cytotoxic through new protein expression. A Neuralink brain implant is a Class III medical device. | Non-invasive: no known medical risks beyond the avoidable risk of hearing damage when sounds are played at high volume for prolonged periods of time, as with loud music. No neurons are damaged or killed. The approach is 100% biocompatible. The vOICe is not a medical device. The vOICe enables training for functional vision by conveying live visual imagery through sound, but it is not intended to treat, cure, or prevent any disease or condition - including blindness. |
| Messy or orderly? | Messy: phosphenes (light flashes) at somewhat random-offset positions (like "stars in the sky"), with variable sizes and shapes, sometimes flashing, with non-veridical colors and with gaps between sometimes partially overlapping phosphenes. Phosphenes are further affected by representational drift, causing neural response changes that may require frequent recalibration to undo. Moreover, the apparent view location shifts with eye movements, which can be quite disorienting unless properly compensated for by eye-tracking with corresponding changes in the displayed view content. Quality of vision will be far worse than early Nintendo or Atari graphics at first, and will in all likelihood never exceed or even approach normal human vision. | Orderly: sounds appearing at perfectly orderly and uniform spatial and temporal positions through pitch (tone frequency), stereo position and time. Negligible if any representational drift at the perceptual level (ChatGPT agrees, PDF). Eye movements do not affect the view, but eye-tracking in combination with zoom slice selection and a higher scan rate could be used to create the auditory equivalent of eye saccades, giving a higher effective visual resolution despite the lower resolution per shortened soundscape scan. It is still unknown to what extent this would make seeing with The vOICe more intuitive and easier to master. |
| See light? | Yes: visual percepts (light flashes) for late-blind people right upon first-time use, giving a great honeymoon experience, but this does not necessarily yield interpretable images. Meaningful "vision" may or may not come much later, after extensive practice and rehabilitation, and artificial vision results may disappoint given the hype. An analogy is what happened with the Argus II retinal implant of Second Sight Medical Products (out of production since 2019): initially, blind recipients were thrilled to perceive light and flashes of light again, but they subsequently never developed functional vision beyond mere light blob detection - a finding that came long after the initial mass media coverage. The only two blind Russian Argus II recipients were beaten by a blind user of The vOICe vision BCI glasses at the Neurothlon 2018 contest in Samara, Russia. | Not from the start: initially perceived as an annoying cacophony of "just sound". The focus is on functional vision. Visual percepts and vision may or may not arise from extensive use and training. Some experienced blind users have reported having visual percepts (cf. "Visual experiences in the blind induced by an auditory sensory substitution device"), and the sounds may become mostly unnoticeable over time as the brain becomes increasingly aware of the visual information content rather than the auditory information carrier. The qualia of vision, in the form of regaining sensations of "light", can be psychologically important for late-blind people, but defining and measuring them remains an ongoing and partly philosophical debate for which there is no scientific basis for answers yet, nor are there definite statistical answers from long-term blind users of The vOICe. There are no well-defined neural correlates of visual qualia yet. E.g., congenitally blind people show activity in their "visual" cortex (occipital lobe) upon sensory input from touch or hearing, unrelated to visual input and unrelated to reporting visual sensations. |
| Late-blind and congenitally blind? | Works only for late-blind people - no matter what Elon Musk claims. Congenitally (totally) blind people do not get visual percepts from electrically stimulating their "visual" cortex (occipital cortex), because neural plasticity caused it to be recruited for other, non-visual purposes such as touch or hearing. There is zero evidence that visual percepts can still arise at a later stage in congenitally blind people. (An intermediate case is that of one late-blind Dobelle brain implant recipient, Dennis, who went blind during adolescence and who initially heard beeps instead of seeing phosphenes upon stimulation of his visual cortex, while his visual cortex was being re-recruited for vision.) Stimulation of the occipital cortex of proficient early-blind Braille readers gave tactile sensations in the fingers instead of perceived phosphenes (light). People who were early blind from cataracts - leaving some color and light perception - and later had their cataracts removed regained only very limited functional vision. Even after normal eyesight until the age of 3 and blindness thereafter, brain plasticity can wipe out existing well-developed functional vision that barely recovers after restoring eyesight in adulthood. Case in point: Mike May (UW news). | Works for both congenitally blind and late-blind people. This assumes more or less normal hearing, but some existing blind users are hard of hearing and use hearing aids, so this is a soft requirement. It does not require visual percepts or neuroplasticity to understand that a rising (falling) tone means a rising (falling) bright line, so congenitally blind people can learn to use it as well to acquire at least some functional vision - for instance to locate and reach for a coffee mug on a table top without groping, to orient toward a doorway or reach for a shiny door handle, track a walking path, spot the stripes of a marked crosswalk (zebra crossing), to some extent identify face shapes, and so on. |
| Textures and shading? | Cannot convey textures and shading, only a couple of lines and edges. Conveying textures, shading and uniform areas would require simultaneously activating many electrodes above phosphene threshold, but this would likely evoke seizures (cf. the Dobelle implant and the Orion I implant) or could even cause death. E.g., activating 10,000 electrodes with a 10-20 μA (microampere) suprathreshold electrical current per electrode would amount to a total current injection of 100-200 mA (milliampere) - although seizure risk would also depend on current return paths. Mitigation through temporal interleaving would come at the expense of effective frame rate. Mitigation through early seizure onset detection would likely imply immediate view degradation to prevent full seizures. Therefore edge detection algorithms are first applied to the camera view to limit the maximum number of simultaneously active electrodes, making anything resembling normal vision impossible. When looking up at a clear blue sky you would probably see... nothing, because there are no edges. | Can convey textures, shading and smooth surfaces without any restriction. Shading and smooth surfaces give smooth noise-like sounds. Textures give characteristic complex dense "rhythms", and can be perceptually more useful than edges when following a path, where edges of surrounding objects or of sharp path-crossing shadows from buildings or trees can be very confusing. |
| Scalable? | Barely scalable, and very expensive. Expensive surgery with special implant hardware limits the potential market to a small number of rich countries with a low incidence of near-total blindness, and even there the market is further limited by reimbursement policies in health insurance, because even there few blind people have the financial resources to pay for the brain implant themselves. Surgery for a visual cortical prosthesis is further complicated by the fact that part of the human visual cortex (especially the foveal part) lies hidden within the banks of the calcarine sulcus, while the size and mapping of primary visual cortex is unique to each individual. Elon Musk may, in line with his general ambitions, force the company to move on to greener (more lucrative, mass-market) pastures once the technology has been shown to work at least somewhat. | Highly scalable at extremely low cost. The vOICe can already run on billions of existing devices: The vOICe for Windows on PCs (over 100,000 downloads), The vOICe for Android (over 500,000 installations from Google Play, with an APK file for side-loading) on Android smartphones, tablets and smartglasses, and The vOICe progressive web app running on "anything" with a modern browser, i.e., not only on PCs and Android devices, but also on iPhones and iPads (Safari browser), Linux devices, Raspberry Pi devices, etc. Many blind people, even in developing countries, already have at least a smartphone that is compatible with The vOICe. Smartphones can be made head-mounted using a cheap VR headset holding the phone - not convenient, but highly affordable. |
| Device lifetime? | Device lifetime is still unknown, likely only several years. Previous work with implanted penetrating electrodes in visual cortex showed effective lifetimes of up to several years (often less), but much depends on the specific proprietary electrode technology. Electrodes typically show signs of corrosion, cracks and other damage after prolonged use in vivo, with the best results thus far reported using PEDOT electrode coatings. One cannot do accelerated longevity (lifetime) testing of brain implants in vivo, so with any new brain implant technology one can only wait for years and hope for the best. Penetrating electrodes may also retract and fail due to movement of the brain relative to the skull. Only the external components can be replaced without surgery, but these may become obsolete once the company decides to no longer support the hardware. The implant itself cannot be replaced without yet another brain surgery, each time leading to some additional tissue damage in the visual cortex. Research into the long-term biocompatibility of penetrating electrodes is ongoing. | Device lifetime is effectively unlimited. Using globally available mature hardware such as smartphones and stereo earbuds, lifetime per hardware component is likely on the order of a decade or more, while replacing the hardware is easy and affordable. Some basic but fully functional versions of the software are open source as well as publicly archived in, for instance, the Internet Archive, so one may reasonably expect to be able to use the technology during one's whole (human) lifetime. This also means that independent scientific studies based on use of The vOICe are less likely to become outdated or irrelevant as companies and their products come and go. Musicians can have a long life performing at a high level if they take care of their hearing. |
| Training support? | Unspecified support for rehabilitation training. "Vision" with a brain implant is totally different from normal vision or even low-vision eyesight, so after recovery from brain surgery it will require extensive rehabilitation to learn to make good, practical and safe use of the brain implant in daily living situations. Both users of The vOICe and brain implant recipients can perform basic online target practice for camera-hand coordination using the Touch-the-Square online training page for square localization. It works in any modern browser (Chrome, Firefox, Edge, etc.) on any platform (Windows, Android, iOS, etc.). Square localization was one of the tests used in the past with Argus II retinal implant recipients. With The vOICe this is best done running The vOICe for Android on smart glasses while watching a computer monitor, but users of The vOICe for Windows can alternatively apply the menu Options \| Sonify GUI \| Active window client (CTRL-F9) while wearing stereo headphones and with focus on the tool window while using a mouse. | The vOICe too requires extensive training for best results. There is still a lack of 1-on-1 training support, which would seem to best fit local blindness institutes already involved in training for safe blind use of the white cane and/or guide dog, and other daily living skills. An online English self-training manual for The vOICe is available, with translations into Russian, Portuguese and Chinese. Its use requires self-discipline, and results would greatly benefit from supplemental on-site feedback and continued motivation by sighted instructors, including existing O&M (orientation and mobility) instructors in your area. |
| Resolution? | Visual resolution is still unknown, but unlikely to benefit much from more than 1,000-10,000 electrodes, given that only lines and edges can be conveyed in order to limit seizure risk. This would limit resolution to between 30 × 30 and 100 × 100 for a few lines and edges, but overlapping receptive fields may give a drastically lower effective resolution. Neuralink is reportedly planning to implant about 6,000 electrodes in their first Blindsight brain implant recipient (likely 3,000 electrodes per hemisphere for the left and right half of phosphene vision). | Visual-to-auditory conversion is by default done at a resolution of 176 × 64 pixels, equivalent to stimulating over 11,000 electrodes (see the spectrograms of The vOICe soundscapes, and the minimal code sketch of the image-to-sound mapping below the table). The effective resolution may lie between about 30 × 30 and 60 × 60, in part depending on actual image content, but theoretically there are possibilities for higher resolution. However, even a resolution as low as 25 × 25 is already considered adequate for ambulatory vision. A visual acuity on the order of 20/160 to 20/240 seems achievable in combination with a field of view on the order of 60°, even without foveal enlargement. An acuity worse than 20/200 (or tunnel vision with a field of view smaller than 20°) defines legal blindness in the US, so an acuity slightly below or slightly above this threshold appears plausible. Importantly, unlike with normal human vision, this acuity is uniform across the entire field of view, and in the visual periphery it can therefore even exceed that of normal vision. |
| Field of view? | The field of view (FOV) is still unknown, but with prior brain implants in visual cortex it has been limited to a cortical area corresponding to the fovea, perceptually giving severe tunnel vision (1° to 2° of visual angle, about the size of a thumbnail at arm's length). All image content is squeezed into this narrow field of view. Since the definition of legal blindness in the US also applies when the field of view is smaller than 20°, the implant recipient would likely still count as legally blind no matter how high the visual acuity. | The field of view (FOV) is typically 60° to 120°, depending mostly on the choice of camera. For all vision systems there is an inevitable trade-off between visual acuity and field of view, but as stated above a visual acuity on the order of 20/160 to 20/240 seems achievable in combination with a field of view on the order of 60°. |
| Frame rate? | Frame rate is still unknown, but neural recovery time after stimulation (refractory period plus perceptual fading upon repeated stimulation) may limit effective frame rate to, say, 4 to 8 frames per second (4-8 fps). Neural habituation (reduced neural response to repeated stimuli) may further substantially reduce effective frame rate and bandwidth. The need for extensive (head) scanning to compensate for a narrow field of view may defeat any frame rate advantage. | Frame rate is by default 1 frame per second (1 fps), optionally set to a higher 2 or 4 fps (or even higher) in a trade-off with effective visual resolution. |
| Masking? | Need not cause sensory masking, but this only applies to totally blind people. Visual cortex stimulation likely interferes with and degrades natural low vision, for instance because the fields of view of the two types of sensory input do not match, and other forms of detrimental interference seem likely as well. | Some degree of masking of natural sounds is inevitable. It is minimized by using a modest sound volume in relation to ambient sound levels, and by using bone conduction headphones that do not block echolocation and natural hearing. The gains in functional vision should outweigh auditory masking effects. |
| Ambiguity? | Unknown, apart from general visual ambiguities and illusions, and the limitations of ultralow vision (ULV). | Very rare, and easily disambiguated. In rare cases, soundscapes of, for instance, horizontal line segments may sound similar to environmental sounds such as the beeping of a truck backing up. If you then look slightly up or down and the pitch changes, you know it is not a truck backing up. |
| Literature? | Virtually no peer-reviewed scientific literature. It is for believers or NDA-signers. Also note that the term "blindsight" as such has been in use for many years in the scientific and other literature to describe the phenomenon that people who are totally blind due to damage to their visual cortex, i.e. cortical blindness, can sometimes still "guess" better than chance about the presence or properties of visual items in front of them, thereby for instance avoiding obstacles through a kind of subconscious vision. This should make trademarking the term (trademarks being case-insensitive) by Neuralink in the close context of a crude form of vision for the totally blind impossible. Similar comments apply to attempts to trademark, for instance, the terms "telepathy" and "telekinesis". | Extensive peer-reviewed scientific literature; see for example the functional MRI of brain activity during a sensory substitution task in Functional MRI of sensory substitution in the blind (2018). We welcome independent studies from all angles, without strings, non-disparagement clauses or NDAs attached, and yes, the learning curve and (for late-blind users) the qualia barrier remain major concerns and uncertainties. To what extent is it possible to nudge even the adult human brain toward new modes of processing through functional (not physical) rewiring? Guide brain states or traveling brainwave constellations? Join the quest! |
| Patient/user reports? | No blind patient reports yet at the time of writing. The lack of any video demo in the Neuralink Update, Summer 2025, of a monkey performing even a simple visual task with its Neuralink Blindsight brain implant suggests that current results are still very poor. E.g., "At least two-thirds of the time, the monkey moved its eyes toward something researchers were trying to trick the brain into visualizing" (Bloomberg, June 2025). In earlier presentations, Neuralink was quite quick to showcase a video of a monkey controlling a cursor and playing Pong (Monkey MindPong, April 2021). | Blind user reports in numerous places and contexts. For example, former Dobelle brain implant recipient Jens Naumann on YouTube (playlist), blind user Pat Fletcher on YouTube, congenitally blind user Pranav Lal on YouTube and Pranav Lal's blog, Russian blind user Vadim Artsev on YouTube (playlist), and congenitally blind user Nimer Jaber on YouTube and Nimer Jaber's blog. The videos include various examples of functional vision, i.e. performing vision-based tasks in ways that would normally have required eyesight or an AI model interpreting the camera view. |
| Deaf-blind users? | No known reason why a brain implant in visual cortex would not work for late-blind people who are also profoundly deaf. | The vOICe assumes more or less normal hearing, but is known to also be used by totally blind people who are hard of hearing, in combination with hearing aids. The vOICe will not work for people who are profoundly deaf, but may work to some extent for totally deaf-blind people who have received a cochlear implant. There are no known reports about this yet. |
| Cortical blindness? | A brain implant in the visual cortex requires a functioning visual cortex to restore some form of vision. It cannot work for people with cortical blindness. Blindness caused by stroke, trauma or other causes of visual cortex damage will be an exclusion criterion. | The vOICe likely activates the visual cortex when it is functional, but it should still offer functional vision to people with cortical blindness. For example, understanding that a rising (falling) tone means a rising (falling) bright line is possible from hearing alone, requiring no (mental) visual imagery nor brain plasticity. Blind people with aphantasia may therefore be able to use it as well. |
| AI support? | Unknown. | AI-based recognition of camera view content can be extremely useful but is not always reliable or sufficient, and is therefore considered supplemental and optional. AI models may hallucinate and should never be relied upon. The vOICe for Android bundles support for offline real-time talking OCR (text recognition), face detection, basic object recognition, and barcode and QR code recognition. More advanced third-party AI apps for object recognition and scene description (e.g. Microsoft Seeing AI or Google Lookout) can be launched from within The vOICe for Android. |
| Funding? | Essentially unlimited funding as long as Elon Musk believes in it. He is likely after bigger fish than the niche market of Neuralink Blindsight for totally blind people. | Available and sustainable indefinitely even without funding. See also the device lifetime row above. |
| Animal experiments? | Experiments with non-human primates and other animals are required to establish safety and thereby gain regulatory approval for use in humans. In the past there has been a lot of controversy about Neuralink's animal experiments, although Neuralink denied animal cruelty claims. | No animal experiments are required or needed for safe deployment of The vOICe with humans. That said, it is possible that animal experiments are performed by independent institutes in the context of neuroscience studies on, for instance, crossmodal brain plasticity and functional rewiring. |
| Ethical issues? | The above and other aspects of brain implants raise numerous ethical concerns beyond the need for animal experiments. There can for instance be harmful medical side-effects as described above, unwarranted expectations due to media or company hype, the implant may fail and device support may end (e.g. due to company bankruptcy or newer products), and image pre-processing that keeps only a few selected edges may inadvertently filter out important safety cues, and so on. | Expectation management toward and by the (prospective) blind user. Expectation management is hampered by a lack of population data on likely end results: training effort in relation to (perceived) benefit might prove unacceptable or inadequate. Also, blind users may misinterpret the visual content of soundscapes, so for their own safety they should never rely solely on The vOICe in mobile use but should keep relying on their cane, guide dog and/or trusted sighted guide. Lack of resources for 1-on-1 training support for best rehabilitation results is a concern. |
| Availability? | Unavailable at the time of writing. No FDA approval (for the US) nor CE mark (for Europe) yet, and no patient has yet received a Blindsight implant as part of a clinical trial. You can join the Neuralink for Visual Prosthesis patient registry. Neuralink received Breakthrough Device designation from the FDA for the Blindsight brain implant, allowing for closer interaction with the FDA for feedback on medical device development; it is not FDA approval for market introduction. Also keep in mind that the FDA only compares with existing medical interventions, and does not consider non-medical approaches or solutions. | Already globally available, actively used (but anonymously, unless the user reports) on many thousands of devices, and compatible with billions of existing devices, ranging from PCs and smartphones to stand-alone AR smart glasses. There is even a version of The vOICe bundled in the free audioScreen add-on for the NVDA screen reader for the blind (GitHub source code). Meta should in due course lift their current policy constraints on using third-party apps with pixel-level camera access on their Meta Quest 2 and Quest 3 devices as well as their Meta Ray-Ban glasses, because these currently block blind people from using The vOICe sensory substitution on Meta devices (both the Android app and the web app). In the United States this may require lawsuits for blind accessibility based on the Americans with Disabilities Act (ADA) and Section 508. Privacy arguments do not apply because The vOICe works fine when all camera view storage or broadcast is blocked at the system level; The vOICe merely locally transforms visual input into auditory output for blind accessibility of vision. |
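To make the "orderly" mapping in the table rows above more concrete, here is a minimal, hedged Python sketch of a vOICe-style image-to-soundscape conversion. It is not The vOICe's own implementation: the 176 × 64 default resolution and the 1-second left-to-right scan are taken from the table, while the roughly 500 Hz to 5 kHz exponential tone ladder, the linear brightness-to-loudness weighting and the simple linear stereo panning are illustrative assumptions.

```python
# Minimal sketch of a vOICe-style image-to-soundscape conversion (illustrative, not the
# official implementation). Vertical position maps to pitch, horizontal position maps to
# time and stereo panning, and pixel brightness maps to tone loudness.
import numpy as np

SAMPLE_RATE = 44100           # audio samples per second
COLS, ROWS = 176, 64          # default resolution quoted in the table: 176 x 64 = 11,264 "pixels"
SCAN_SECONDS = 1.0            # default scan rate quoted in the table: 1 frame per second
F_LO, F_HI = 500.0, 5000.0    # assumed exponential tone ladder from low to high pitch

def sonify(frame: np.ndarray) -> np.ndarray:
    """Convert a (ROWS, COLS) grayscale frame in [0, 1] to one stereo soundscape scan."""
    assert frame.shape == (ROWS, COLS)
    # Row 0 is the top of the image and gets the highest pitch.
    freqs = F_LO * (F_HI / F_LO) ** (np.arange(ROWS)[::-1] / (ROWS - 1))
    samples_per_col = int(SAMPLE_RATE * SCAN_SECONDS / COLS)
    left = np.zeros(COLS * samples_per_col)
    right = np.zeros_like(left)
    for c in range(COLS):                                        # left-to-right column scan
        start = c * samples_per_col
        t = (start + np.arange(samples_per_col)) / SAMPLE_RATE   # keeps tone phases continuous
        tones = np.sin(2 * np.pi * freqs[:, None] * t)           # one sinusoid per row
        chunk = (frame[:, c][:, None] * tones).sum(axis=0)       # brightness weights loudness
        pan = c / (COLS - 1)                                     # 0 = far left, 1 = far right
        left[start:start + samples_per_col] = (1.0 - pan) * chunk
        right[start:start + samples_per_col] = pan * chunk
    stereo = np.stack([left, right], axis=1)
    peak = np.abs(stereo).max()
    return stereo / peak if peak > 0 else stereo                 # normalize to [-1, 1]

# Example: a bright diagonal line from bottom-left to top-right yields a rising pitch sweep.
frame = np.zeros((ROWS, COLS))
rows = (ROWS - 1 - np.arange(COLS) * ROWS // COLS).clip(0, ROWS - 1)
frame[rows, np.arange(COLS)] = 1.0
soundscape = sonify(frame)      # shape: (COLS * samples_per_col, 2), float values in [-1, 1]
```

Writing the resulting array to a WAV file (for instance with `scipy.io.wavfile.write` after converting to float32) lets one hear how a rising bright line becomes a rising pitch sweep, the orderly behaviour contrasted in the table with the scattered phosphene maps of cortical implants.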
Connecting the dots
Note that The vOICe and Neuralink Blindsight (and other brain implants) need not be mutually exclusive. A hybrid approach is conceivable where Neuralink Blindsight and The vOICe share the same camera view, such that the distorted and limited visual view of a Neuralink Blindsight brain implant can be supplemented with rich and undistorted visual feedback from The vOICe's soundscapes, including for instance the textures and shading missing from an edges-only Blindsight implant view. This combination may help the blind user to interpret the live camera view. Even OpenAI's ChatGPT (PDF) agrees, but concludes that for the time being non-invasive sensory substitution will remain the superior choice (PDF) for restoring functional vision to blind individuals. Furthermore, hyped super-human abilities such as "predator vision" - perceiving light outside the visible spectrum, e.g. infrared for thermal vision, or ultraviolet - depend trivially on the choice of camera, i.e. the choice of sensory input device, and therefore apply equally to Neuralink Blindsight and The vOICe. The same holds for using depth mapping cameras (LiDAR, stereo vision, TOF cameras) or AI depth mapping models for easier obstacle detection, as illustrated in the short sketch below. Some outputs generated by other AI models, such as DeepSeek, Anthropic's Claude, xAI's Grok and Google's Gemini, have been archived for future reference.

A theoretically more promising hybrid approach might be to combine The vOICe with yet another non-invasive approach such as transcranial focused ultrasound stimulation (tUS/tFUS); cf. ChatGPT 4o: Best technological options for blind people to see. The phased array technology being developed by Nudge might lead to phosphene vision in V1 of comparable visual resolution through ~1 mm ultrasound focusing accuracy, provided its safety in chronic use can be ensured, patient-dependent focal aberration by the skull can be accounted for, its bulky helmet form factor can be made aesthetically acceptable, and movement of the human brain inside the skull can be tracked and compensated for in real-time. Vision is very multi-faceted, and it is naive to presume that any single visual prosthesis approach will address all aspects of human vision. Also keep in mind that hardware eventually fails, while software eventually works.
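As a small illustration of the camera-agnostic point above, the hedged sketch below uses synthetic stand-ins for hypothetical thermal and depth sensors (not real device APIs): "predator vision" or obstacle-oriented depth vision only changes how a 2-D sensor frame is normalized to brightness before being fed to the same `sonify()` function sketched after the table.

```python
import numpy as np

def normalize(frame: np.ndarray) -> np.ndarray:
    """Scale any 2-D sensor frame to [0, 1] 'brightness' for sonification."""
    frame = frame.astype(float)
    lo, hi = frame.min(), frame.max()
    return (frame - lo) / (hi - lo) if hi > lo else np.zeros_like(frame)

# Synthetic stand-ins for hypothetical sensors (64 rows x 176 columns,
# matching the default vOICe resolution quoted in the table):
thermal_frame = np.random.uniform(20.0, 40.0, size=(64, 176))  # degrees Celsius
depth_frame = np.random.uniform(0.3, 5.0, size=(64, 176))      # distance in metres

heat_view = normalize(thermal_frame)          # warm objects become bright
obstacle_view = normalize(1.0 / depth_frame)  # nearby obstacles become bright

# soundscape = sonify(obstacle_view)          # same sonify() as in the sketch above
```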
In May 2025, ChatGPT (PDF) concluded: "If you want real-world, functional, trainable 'vision' for the blind - especially for totally or congenitally blind individuals - The vOICe and other visual-to-auditory sensory substitution systems are the most powerful technologies available today. Everything else is experimental, overly narrow, invasive, or overly simplified."
In June 2025, when challenged to write a pitch against the use of visual-to-auditory sensory substitution for the blind, ChatGPT (PDF) initially obliged nicely, but completely changed its pitch after reading comments from two blind users.
There are still many open questions regarding the role of neural plasticity: in blind adults, the brain's connectome ("physical wiring diagram") will be mostly fixed, but this need not apply to the brain-wide projectome (defining functional connectivity among brain areas): both brain implants and sensory substitution devices may lead to functional rewiring through changes in e.g. dendritic spines, thereby modifying the projectome. There is plenty of room for scientific studies with practical relevance far beyond blindness. With both brain implants and sensory substitution, visual feature detectors may be missing for properties such as curvature (corners), contour closure, overlap (occlusion) and containment (nesting), and to what extent these can be acquired through prolonged training is unknown. Vice versa, very few if any studies address possibilities for boosting top-down visual mental imagery, i.e., mental imagery engineering.

Other companies working on or toward brain implants for restoring vision include Cortigent (formerly Second Sight Medical Products, now a subsidiary of Vivani Medical; Orion brain implant), Sensor-Tech (ELVIS V brain implant), ReVision Implant (Occular brain implant, aiming for first trials with tumor and epilepsy patients in 2026, so first trials with blind patients likely not before 2027; cf. HyperStim, FlairVision), Phosphoenix (LGN stimulation, cf. POSITIONED, SIGHTED), Science Corp. and INBRAIN, and potentially Precision Neuroscience, while still other brain implants for restoring vision are being developed in a more academic setting, such as the Intracortical Visual Prosthesis (ICVP) of Illinois Institute of Technology and the Gennaris bionic vision system of Monash University. Similarly, there are a few companies developing other sensory substitution devices for the blind, such as the BrainPort Vision Pro tongue display of Wicab. Note that the BrainPort is a Class II medical device, because its electrodes are in direct contact with the human body (the tongue).
Investor note: Smart glasses with AI-based scene description, OCR for reading print, and visual-to-auditory sensory substitution for "raw vision" will erode the market opportunities for all implantable visual prostheses (retinal implants and brain implants for restoring vision to the blind). Cf. ChatGPT's AI assessments in VASS vs visual implants (PDF), benchmark Neuralink vs The vOICe (PDF) and Neuralink Blindsight reasons (PDF). For Android devices, including smart glasses, AI-based scene description and OCR are already offered by for instance Google Lookout and Microsoft Seeing AI, while visual-to-auditory sensory substitution is offered by The vOICe for Android. Moreover, how does one convince the FDA that it is safe to use a high resolution visual cortical prosthesis for more than a short clinical trial, i.e., how does one bring such a medical device to the market for daily (chronic) use? Run a 5-year pilot study with a few human volunteers, after numerous animal experiments, before potentially getting FDA approval with stringent post-market review requirements? Investors here need deep pockets and lots of patience for an uncertain and relatively small market. With The vOICe, by analogy with Linux, money may be made on training and support, and need not depend on poorly scalable proprietary hardware. Apart from visual prostheses, interest in electrode-based BCIs will wane more generally (aside from niche markets), as non-implantable ("non-invasive") phased array focused ultrasound will take over for both brain stimulation and measuring brain activity.
Existential threat: Smart glasses that see, listen and respond in real-time, with web searches, AI-based scene description and OCR for reading print, but lacking incentives for also mastering "raw vision" (through a brain implant and/or sensory substitution), can be seductive to the point of dumbing you down: no need to think or see for yourself anymore once you get plausible answers in real-time without effort. (AI assessment by ChatGPT in empowerment vs assistance for blind, PDF.)

Some brain implant companies, for their own convenience, exclude sensory substitution devices from their BCI benchmarking publications and reporting to their investors, on the grounds that (by certain definitions) BCIs must interact directly with target neurons (e.g. in visual cortex), and not indirectly via other neurons (e.g. in auditory cortex). Similarly, some brain implant companies conveniently exclude sensory substitution devices from their reporting for not being medical devices. However, if it walks like an invasive BCI and talks like an invasive BCI, i.e. offers much the same functional vision as an invasive BCI, it probably should be considered a (non-medical) BCI and visual prosthesis for all practical purposes - even if it non-invasively and indirectly activates or modulates target neurons in the brain. No hiding behind definitions and conventions.
Cf. An application-based taxonomy for brain–computer interfaces (Nature Biomedical Engineering, 2024): "We advocate that the term BCI should be used to define any technology that records brain activity and processes it on an electronic device, or any technology that stimulates brain activity based on computations performed on an electronic device, regardless of whether the technology is implanted or not." However, experts from different backgrounds are still struggling to come up with a good definition of what constitutes a BCI, as exemplified by the BCI Society's 2024 BCI Working Definition, which excludes the Dobelle and Orion brain implants for restoring vision just because these devices do not measure any brain activity. New proposal: "A BCI (Brain-Computer Interface) is a direct or indirect technological interface between the brain and a computing device, with the goal of substantially improving (restoring) communication and interactions between the brain and its physical environment and/or improving (restoring) functioning of the brain itself."

Finally, it should be stressed that BCIs in the form of cortical brain implants look very promising for restoring motor functions, e.g. treating paralysis caused by ALS (amyotrophic lateral sclerosis) or spinal cord injury, or certain speech impairments, as well as for treating certain mental conditions such as depression, but their net benefit as a visual prosthesis remains questionable. The history of the Dobelle brain implant and the Argus II retinal implant may repeat itself with Elon Musk's Neuralink Blindsight brain implant: media hype, excitement, the initial blind patient thrill of restored visual sensations widely covered in the mass media, then mostly silence, no meaningful follow-up, with devices failing or offering too little functional vision, and with blind patients forgotten.
Summarizing, smart glasses offering a combination of sensory-substitution-based "raw" vision and AI-based scene description and OCR appear to be technically and economically the most feasible and sustainable way toward meeting the expectations, needs and interests of many blind people at a global scale.
Note: bone conduction stereo earbuds can be hidden in the legs of smart glasses for the blind.