From Thought to Sound in 10ms

Stephen Hawking typed at one word per minute; now a paralyzed man can speak near-instantly, straight from his brain. Are we finally digitizing the human voice itself?
We just took a major leap toward restoring speech through thought. At UC Davis, a team led by Maitreyee Wairagkar developed a neural prosthesis that translates brain signals not into text, but into actual speech.
With just 256 implanted electrodes and a real-time AI decoder, a patient with ALS known as T15 can now vocalize sounds, including pitch changes, interjections, and even singing, with only 10 milliseconds of latency.
Unlike earlier BCIs that relied on slow, text-based systems, this prosthesis focuses on sound production, enabling expression beyond dictionaries.
The system captured T15’s neural activity at the level of individual neurons, decoded it into phonemes and vocal cues, then passed the result through a vocoder tuned to his original voice. It’s not yet ready for daily conversation, but in the open-vocabulary transcription test the word error rate was 43.75%, a staggering improvement on his 96.43% baseline.
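For context on those percentages: word error rate (WER) is the standard speech-decoding metric, computed as the word-level edit distance (substitutions, insertions, deletions) between the decoded transcript and the reference, divided by the reference length. A minimal sketch of the metric itself, not the study's actual evaluation code:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = dp[i - 1][j] + 1
            insertion = dp[i][j - 1] + 1
            dp[i][j] = min(substitution, deletion, insertion)
    return dp[len(ref)][len(hyp)] / len(ref)

# One deleted word out of four reference words -> WER of 0.25
print(wer("i want to sing", "i want sing"))  # 0.25
```

A 43.75% WER means roughly four in ten decoded words still need correction, which is why the authors frame this as a research milestone rather than a finished communication device.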
- Translates brain signals directly into speech
- Operates at 10ms latency—nearly real-time
- Achieved 100% accuracy in closed-vocabulary tests
We’re witnessing the early stages of a digital vocal tract. When voice becomes code and thought becomes sound, who controls your ability to speak? If your brain had a “Send” button, what would you say first?
Read the full article in Nature.
----
💡 We're entering a world where intelligence is synthetic, reality is augmented, and the rules are being rewritten in front of our eyes.
Staying up to date in a fast-changing world is vital. That is why I launched Futurwise, a personalized AI platform that transforms information chaos into strategic clarity. With one click, you can bookmark and summarize any article, report, or video in seconds, tailored to your tone, interests, and language. Visit Futurwise.com to get started for free!
