Brain Interface Technology Enables Communication for the Paralysed

Category Machine Learning

tldr #

Researchers from UC Berkeley and UCSF have made a breakthrough in brain-computer interface technology, enabling a person with paralysis to speak through a digital avatar. The system decodes electrical signals from the brain into both speech and facial expressions. With the help of AI algorithms, the system is accurate and three times faster than commercially available technology.


content #

Researchers at UC San Francisco and UC Berkeley have developed a brain-computer interface (BCI) that has enabled a woman with severe paralysis from a brainstem stroke to speak through a digital avatar. It is the first time that either speech or facial expressions have been synthesized from brain signals. The system can also decode these signals into text at nearly 80 words per minute, a vast improvement over commercially available technology.

This is the first brain-computer interface to decode brain signals into both speech and facial expressions

Edward Chang, MD, chair of neurological surgery at UCSF, has worked on the technology, known as a brain-computer interface, or BCI, for more than a decade. He hopes this latest research breakthrough, appearing Aug. 23, 2023, in Nature, will lead to an FDA-approved system that enables speech from brain signals in the near future. "Our goal is to restore a full, embodied way of communicating, which is really the most natural way for us to talk with others," said Chang, who is a member of the UCSF Weill Institute for Neurosciences and the Jeanne Robertson Distinguished Professor in Psychiatry.

UCSF and UC Berkeley researchers developed the algorithm that works with the 253 electrodes placed on the surface of the brain

"These advancements bring us much closer to making this a real solution for patients," Chang added. His team previously demonstrated it was possible to decode brain signals into text in a man who had also experienced a brainstem stroke many years earlier. The current study demonstrates something more ambitious: decoding brain signals into the richness of speech, along with the movements that animate a person's face during conversation.

The system needed to learn only 39 phonemes to recognise any English word

Chang implanted a paper-thin rectangle of 253 electrodes onto the surface of the woman's brain, over areas his team has discovered are critical for speech. The electrodes intercepted the brain signals that, if not for the stroke, would have gone to muscles in her tongue, jaw and larynx, as well as her face. A cable, plugged into a port fixed to her head, connected the electrodes to a bank of computers.
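
As a rough illustration of the data involved, here is a minimal sketch of one analysis window of multichannel cortical recordings. Only the electrode count (253) comes from the article; the sampling rate, window length and the `make_window_buffer` helper are illustrative assumptions.

```python
import numpy as np

NUM_ELECTRODES = 253   # from the article: 253 electrodes on the cortical surface
SAMPLE_RATE_HZ = 1000  # assumed acquisition rate, not stated in the article
WINDOW_SECONDS = 0.5   # assumed analysis window, not stated in the article

def make_window_buffer() -> np.ndarray:
    """Allocate one analysis window of multichannel cortical data.

    Each row is one electrode and each column is one time sample,
    so a decoder consumes a (channels x time) matrix per window.
    """
    samples = int(SAMPLE_RATE_HZ * WINDOW_SECONDS)
    return np.zeros((NUM_ELECTRODES, samples), dtype=np.float32)

window = make_window_buffer()
print(window.shape)  # (253, 500)
```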

The AI model used to synthesize speech was personalized to sound like the participant by encoding vocal characteristics

For weeks, the participant worked with the team to train the system's artificial intelligence algorithms to recognize her unique brain signals for speech. This involved repeating different phrases from a 1,024-word conversational vocabulary over and over again, until the computer recognized the brain activity patterns associated with the sounds. Rather than train the AI to recognize whole words, the researchers created a system that decodes words from phonemes.
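
To see why phonemes help, consider the size of the decoder's output space. Below is a minimal sketch, assuming a single linear layer as a stand-in for the study's trained neural network: scoring 39 phoneme classes takes far fewer output weights than scoring 1,024 whole words, yet every English word stays reachable.

```python
import numpy as np

NUM_PHONEMES = 39   # from the article: 39 phonemes cover any English word
VOCAB_SIZE = 1024   # the study's 1,024-word conversational vocabulary
FEATURE_DIM = 253   # one feature per electrode -- a simplification

# A phoneme output head is much smaller than a whole-word head.
print(FEATURE_DIM * NUM_PHONEMES)  # 9867 weights
print(FEATURE_DIM * VOCAB_SIZE)    # 259072 weights

rng = np.random.default_rng(0)
W = rng.normal(size=(FEATURE_DIM, NUM_PHONEMES))  # untrained stand-in weights

def decode_frame(features: np.ndarray) -> int:
    """Score one frame of neural features against all 39 phoneme
    classes and return the index of the most likely phoneme."""
    return int(np.argmax(features @ W))

print(decode_frame(rng.normal(size=FEATURE_DIM)))
```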

The phoneme-based approach makes the system much faster than commercially available technology

These are the sub-units of speech that form spoken words in the same way that letters form written words. "Hello," for example, contains four phonemes: "HH," "AH," "L" and "OW." Using this approach, the computer only needed to learn 39 phonemes to decipher any word in English. This both enhanced the system's accuracy and made it three times faster. "The accuracy, speed and vocabulary are crucial," said Sean Metzger, who developed the text decoder with Alex Silva, both graduate students in the joint Bioengineering Program at UC Berkeley and UCSF.
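
The step from decoded phonemes back to words can be pictured with a toy lexicon. This is a hedged sketch, not the study's method: the `LEXICON` entries and the greedy matcher are invented for illustration, using the ARPAbet-style labels the article quotes.

```python
# Tiny hand-written lexicon mapping phoneme sequences to words (invented).
LEXICON = {
    ("HH", "AH", "L", "OW"): "hello",
    ("W", "ER", "L", "D"): "world",
}

def phonemes_to_words(phonemes):
    """Greedily match runs of decoded phonemes against the lexicon,
    preferring the longest match at each position."""
    words, i = [], 0
    while i < len(phonemes):
        for j in range(len(phonemes), i, -1):
            key = tuple(phonemes[i:j])
            if key in LEXICON:
                words.append(LEXICON[key])
                i = j
                break
        else:
            i += 1  # no word starts here; skip one phoneme
    return words

decoded = ["HH", "AH", "L", "OW", "W", "ER", "L", "D"]
print(phonemes_to_words(decoded))  # ['hello', 'world']
```

Production decoders typically pair phoneme predictions with a language model rather than exact lookup, so that likely word sequences win out over spurious matches.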

The new system decodes brain signals into text at nearly 80 words per minute

"It's what gives a user the potential, in time, to communicate almost as fast as we do, and to have much more naturalistic and normal conversations," Metzger said. To create the voice, the team devised an algorithm for synthesizing speech, which they personalized to sound like the participant by encoding her own vocal characteristics into the model.

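A heavily simplified sketch of that personalization idea follows. Everything here is assumed for illustration: the `encode_voice` and `synthesize` stand-ins, the embedding size, and the placeholder audio. The article says only that the team encoded the participant's own vocal characteristics into the model.

```python
import numpy as np

EMBED_DIM = 64  # assumed size of the vocal-characteristics vector

def encode_voice(reference_audio: np.ndarray) -> np.ndarray:
    """Stand-in for a speaker encoder that compresses a reference
    recording into a fixed-length vector of vocal characteristics."""
    stats = np.array([reference_audio.mean(), reference_audio.std()])
    return np.resize(stats, EMBED_DIM)  # toy summary, tiled to size

def synthesize(phoneme_ids: list, voice: np.ndarray) -> np.ndarray:
    """Stand-in synthesizer: a real model would condition generated
    audio on the phoneme sequence and the voice vector; this toy
    version only produces placeholder audio of a plausible shape."""
    rng = np.random.default_rng(sum(phoneme_ids))
    return rng.normal(scale=float(np.abs(voice).mean()) + 1e-6, size=8000)

reference = np.random.default_rng(1).normal(size=16000)  # stand-in recording
audio = synthesize([7, 3, 21], encode_voice(reference))
print(audio.shape)  # (8000,)
```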
