
BioByte 130: Understanding Inner Speech for Neuroprostheses, Phylogenetic Relationships Hidden within Evo 2, and Nanosyringes to Deliver Diverse Biomolecules

Welcome to Decoding Bio’s BioByte: each week our writing collective highlights notable news—from the latest scientific papers to the latest funding rounds—and everything in between. All in one place.

What we read

Blogs

Study of promising speech-enabling interface offers hope for restoring communication [Bruce Goldman, Stanford Medicine, August 2025]

A new publication from the Neural Prosthetics Translational Lab at Stanford Medicine explores a key question: is inner speech represented in the motor cortex, and can it be decoded quickly and safely enough for speech-restoring neuroprostheses? Although previous work from the lab has demonstrated that brain-computer interfaces (BCIs) can reliably extract and translate signals from paralyzed individuals attempting to speak or write, whether inner speech - i.e., imagining the sound or feel of speaking - can be used in the same way remains an open question. Unlocking inner speech as a modality could enable more natural, fluent communication while reducing the fatigue and articulatory effort often required of users.

To investigate this, Kunz et al. recorded data from microelectrode arrays implanted in four individuals with severe paralysis. They reveal that inner speech produces a robust, speech-like neural geometry in the ventral and mid-precentral cortex. Notably, inner speech is not a degraded, noisy shadow of attempted speech, but instead forms a scaled, separable representation - the authors identify a ‘motor-intent’ neural dimension distinguishing inner from attempted speech, providing a distinct substrate for decoding.

The decoding pipeline the authors deploy is straightforward: neural features are fed into a recurrent neural network that outputs probabilities over phonemes (the atomic building blocks of speech) every ~80 ms; these outputs are then converted into words and sentences using a language model, and a text-to-speech engine renders the result as audio. In closed-loop tests involving explicit cues, the system decoded imagined sentences in real time using a 125,000-word vocabulary and achieved encouraging accuracies, ranging from 46% to 74%.

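To make the pipeline concrete, here is a minimal, hypothetical sketch in Python/PyTorch of the recurrent phoneme-decoding stage. The feature dimensions, network size, phoneme count, and the greedy stand-in for the language-model stage are all illustrative assumptions, not the authors' actual implementation:

```python
# Hypothetical sketch of the decoding pipeline described above (not the authors' code).
import torch
import torch.nn as nn

N_FEATURES = 512   # assumed number of neural features per ~80 ms bin
N_PHONEMES = 41    # assumed: ~39 English phonemes plus silence/blank tokens

class PhonemeRNN(nn.Module):
    """Recurrent network mapping binned neural features to per-bin phoneme logits."""
    def __init__(self, n_features=N_FEATURES, hidden=256, n_phonemes=N_PHONEMES):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, num_layers=2, batch_first=True)
        self.readout = nn.Linear(hidden, n_phonemes)

    def forward(self, x):
        # x: (batch, time_bins, n_features) — one time bin per ~80 ms of neural data
        h, _ = self.rnn(x)
        return self.readout(h)  # (batch, time_bins, n_phonemes) logits

model = PhonemeRNN()
features = torch.randn(1, 50, N_FEATURES)      # ~4 s of simulated neural activity
phoneme_probs = model(features).softmax(dim=-1)  # one probability vector per 80 ms bin

# In the real system, these per-bin probabilities are rescored by a large-vocabulary
# language model to produce words and sentences, and a text-to-speech engine renders
# the audio. A greedy argmax is used here only as a placeholder for that stage:
best_phonemes = phoneme_probs.argmax(dim=-1)
```
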
The paper concludes by addressing a crucial ethical issue: is freeform, uninstructed inner speech decodable? To probe this, participants were prompted with open-ended tasks such as sequence recall, counting, and word associations. The corresponding inner speech signals were, in principle, interpretable - though predictive accuracy varied significantly across participants and tasks. Critically, the team also demonstrated two separate, high-accuracy safety mechanisms to prevent inadvertent translation of inner speech. The first control involves labeling attempted speech patterns with their appropriate phonemes and labeling inner speech patterns with a ‘silence’ token, effectively suppressing the latter.

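As a rough illustration of how such a labeling scheme might look when preparing training data, here is a short sketch. The segment format, phoneme symbols, and the `SIL` token name are assumptions made for illustration, not details from the paper:

```python
# Hypothetical illustration of the first safety mechanism: attempted-speech segments
# keep their phoneme labels, while inner-speech segments are relabeled as silence,
# so the decoder learns not to emit text for inner speech.
SILENCE = "SIL"

def make_training_labels(segments):
    """segments: list of (condition, phoneme_sequence) tuples (illustrative format)."""
    labels = []
    for condition, phonemes in segments:
        if condition == "attempted":
            labels.append(phonemes)                    # decode attempted speech normally
        elif condition == "inner":
            labels.append([SILENCE] * len(phonemes))   # suppress inner speech
    return labels

example = [("attempted", ["HH", "EH", "L", "OW"]),
           ("inner",     ["HH", "EH", "L", "OW"])]
print(make_training_labels(example))
# [['HH', 'EH', 'L', 'OW'], ['SIL', 'SIL', 'SIL', 'SIL']]
```
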
...