From Thought to Sound in 10ms

Stephen Hawking typed at roughly one word per minute. Now a paralyzed man can speak near-instantly, straight from his brain. Are we finally digitizing the human voice itself?

We just took a major leap toward restoring speech through thought. At UC Davis, a team led by Maitreyee Wairagkar developed a neural prosthesis that translates brain signals, not into text, but into actual speech.

With just 256 implanted electrodes and a real-time AI decoder, a patient with ALS known as T15 can now produce speech sounds, control pitch, add interjections, and even sing, with only 10 milliseconds of latency.

Unlike earlier BCIs that relied on slow, text-based systems, this prosthesis focuses on sound production itself, enabling expression beyond the limits of any dictionary.

The system captured T15’s neural activity at the level of individual neurons, decoded it into phonemes and vocal cues, and passed the result through a vocoder tuned to his original voice. It’s not yet ready for daily conversation: in the open-vocabulary transcription test, listeners still misheard 43.75% of words. But that is a staggering improvement on his 96.43% unaided baseline.
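For the technically curious, here is a minimal sketch of what a streaming brain-to-voice pipeline of this kind could look like. Everything in it is illustrative: the function names, the 10 ms frame size, the 40 phoneme classes, and the 16 kHz output rate are my assumptions, not the UC Davis team’s actual implementation.

```python
import numpy as np

FRAME_MS = 10  # one decoding step per 10 ms of neural data (assumed frame size)

def decode_frame(neural_frame: np.ndarray) -> dict:
    """Hypothetical decoder: maps one frame of spiking features from
    256 electrodes to speech features. A stand-in for a trained model,
    not the study's network."""
    return {
        "phoneme_probs": np.random.dirichlet(np.ones(40)),  # 40 phoneme classes (assumed)
        "pitch_hz": 120.0,   # prosodic cues decoded alongside phonemes
        "loudness": 0.8,
    }

def vocode(features: dict) -> np.ndarray:
    """Hypothetical vocoder tuned to the user's original voice:
    turns speech features into a 10 ms chunk of audio samples."""
    n_samples = int(16_000 * FRAME_MS / 1000)  # 16 kHz output (assumed)
    return np.zeros(n_samples, dtype=np.float32)  # placeholder waveform

def streaming_loop(neural_stream):
    """Causal loop: each 10 ms of brain activity becomes 10 ms of sound,
    so end-to-end latency stays at roughly one frame."""
    for neural_frame in neural_stream:
        features = decode_frame(neural_frame)  # spikes -> phonemes + prosody
        audio_chunk = vocode(features)         # features -> waveform
        yield audio_chunk                      # play back immediately

# Demo: one second of fake neural input (100 frames of 256-channel features)
fake_stream = (np.zeros(256) for _ in range(100))
audio = np.concatenate(list(streaming_loop(fake_stream)))
```

The key design choice is causality: nothing waits for a whole word or sentence to finish, which is why the latency can be a single frame rather than seconds.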

  • Translates brain signals directly into speech, not text
  • Operates at 10 ms latency, nearly real-time
  • Achieved 100% accuracy in closed-vocabulary tests

We’re witnessing the early stages of a digital vocal tract. When voice becomes code and thought becomes sound, who controls your ability to speak? If your brain had a “Send” button, what would you say first?

Read the full article in Nature.

----

💡 If you enjoyed this content, be sure to download my new app for a unique experience beyond your traditional newsletter.

This is one of many short posts I share daily on my app, where you can get real-time insights, recommendations, and conversations with my digital twin via text, audio, or video in 28 languages! Go to my PWA at app.thedigitalspeaker.com and sign up to take our connection to the next level! 🚀

If you are interested in hiring me as your futurist and innovation speaker, feel free to complete the form below.