Researchers are working on a technology that could allow people with speech disabilities to regain their voice. It works by harnessing brain activity to produce synthesized speech: a new algorithm bridges the gap by turning the messages the brain sends to the vocal muscles into legible sounds.
In the course of the research, scientists at the University of California, San Francisco, implanted electrodes into the brains of volunteers and decoded signals in cerebral speech centers to guide a computer-simulated version of their vocal tract – lips, jaw, tongue and larynx – which generated speech through a synthesizer. Read on to learn more about the invention.
Technology That Turns Neural Activity Into Speech
Converting the complex mix of information sent from the brain to the many body parts required to turn a puff of air into meaningful sound is clearly not a simple task. The researchers approached it with a method that reconstructed one-syllable words directly from the brain's perception of spoken sounds, recorded along the human auditory pathway. Synthetic speech produced this way could be understood about three quarters of the time, which is not a bad result by any measure. The resulting speech is not crystal clear, but it is certainly intelligible. Set up correctly, the system could output up to 150 words per minute from a person who might otherwise be incapable of speech.
It is important to understand that this process isn't turning abstract thoughts into words. It is decoding the brain's concrete instructions to the muscles of the face, and determining which words those movements would be forming. The basic idea: it's brain reading, but it isn't mind reading.
Research Procedure of the New Technology
The research procedure consisted of implanting numerous electrodes into a number of patients' brains. Several sensors were also glued to their tongue, teeth and lips in order to track their movements. This allowed the researchers to trace the path of the neurological messages that drove each patient's speech system. Translating muscle movements in this way made the signals clearer and easier to interpret than a single-step translation of the brain signals alone.
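The two-step pipeline described above can be sketched in a few lines of code. This is a simplified illustration only: the dimensions, the random linear mappings, and the function names are hypothetical placeholders, whereas the actual UCSF system used recurrent neural networks trained on real electrode recordings.

```python
import numpy as np

rng = np.random.default_rng(0)

N_ELECTRODES = 16   # hypothetical number of implanted electrode channels
N_ARTICULATORS = 6  # e.g. lips, jaw, tongue and larynx kinematics
N_ACOUSTIC = 8      # e.g. acoustic features fed to a speech synthesizer

# Stage 1: map neural activity to vocal-tract (articulator) movements.
# In the real study this mapping was learned from the glued-on sensors.
brain_to_articulator = rng.normal(size=(N_ARTICULATORS, N_ELECTRODES))

# Stage 2: map articulator movements to acoustic features for synthesis.
articulator_to_acoustic = rng.normal(size=(N_ACOUSTIC, N_ARTICULATORS))

def decode(neural_frame: np.ndarray) -> np.ndarray:
    """Two-step decode: neural signals -> articulator movements -> acoustics."""
    articulators = brain_to_articulator @ neural_frame
    acoustics = articulator_to_acoustic @ articulators
    return acoustics

# One simulated frame of neural activity across the electrode array.
frame = rng.normal(size=N_ELECTRODES)
features = decode(frame)
print(features.shape)  # one acoustic feature vector per time frame
```

The design point the sketch captures is the intermediate articulator representation: decoding movements first, then sound, is what made the translation easier to interpret than going straight from brain signals to audio.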
The researchers explained that they still have a long way to go to perfectly mimic spoken language. But the level of accuracy they achieved would be a remarkable improvement in real-time communication compared to what's currently available.
Turning this kind of research into marketable technology will definitely take a lot more work. It will also require overcoming the practical and ethical hurdles involved in delivering neural implants. The good news is that this is just the start, and there are plenty of conditions it could theoretically work for. The critical brain and speech recordings could even be collected preemptively in cases where a stroke or neural degeneration is considered a risk. The advances clearly speak for themselves.
Subscribe to iTMunch for more updates in the world of IT, AI, HR, finance, marketing and more!