AI technology transforms brain waves into real-time speech for paralyzed patients
On April 2, researchers in California unveiled an artificial intelligence-driven system that enables people with severe paralysis to communicate in a natural voice in near real time. The advance in brain-computer interface (BCI) research comes from teams at the University of California, Berkeley, and the University of California, San Francisco.
The system uses neural interfaces to measure brain activity and AI models to reconstruct the speech the user is attempting to produce. Unlike earlier systems, which had to wait for a full sentence before decoding it, this one synthesizes speech in a continuous stream, giving neuroprosthetic speech an unprecedented degree of fluency and naturalness. "The streaming method we've developed represents significant progress," stated Gopala Anumanchipalli, who led the research.
The device works with a range of sensing interfaces, including high-density electrode arrays that record from the brain's surface, penetrating microelectrodes, and non-invasive sensors that measure facial muscle activity. It samples neural activity from the motor cortex, the brain region that controls speech production, and AI decodes that activity into audible speech in under a second.
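Conceptually, this streaming architecture is a loop that consumes short windows of neural features and emits an audio increment as soon as each window is decoded, instead of waiting for a complete sentence. The Python sketch below illustrates only that loop structure; the window size, channel count, and `decode_window` model are illustrative placeholders standing in for the researchers' trained decoder, and the input is synthetic random data rather than real neural recordings.

```python
import numpy as np

# Minimal sketch of a streaming neural-to-speech loop (hypothetical values).
FEATURE_RATE_HZ = 200       # assumed neural feature rate
WINDOW_MS = 80              # short decode increment keeps latency low (assumed)
N_CHANNELS = 253            # e.g., channels on a high-density electrode array
AUDIO_RATE_HZ = 16_000      # output audio sample rate

SAMPLES_PER_WINDOW = FEATURE_RATE_HZ * WINDOW_MS // 1000
AUDIO_PER_WINDOW = AUDIO_RATE_HZ * WINDOW_MS // 1000

rng = np.random.default_rng(0)
# Placeholder "model": a fixed linear projection from one window of neural
# features to one chunk of audio. The real system uses trained neural networks.
projection = rng.standard_normal((SAMPLES_PER_WINDOW * N_CHANNELS, AUDIO_PER_WINDOW))

def decode_window(window: np.ndarray) -> np.ndarray:
    """Map a (samples, channels) window of neural features to audio samples."""
    return np.tanh(window.reshape(-1) @ projection * 1e-3)

def neural_stream(n_windows: int):
    """Simulate the interface delivering neural features one window at a time."""
    for _ in range(n_windows):
        yield rng.standard_normal((SAMPLES_PER_WINDOW, N_CHANNELS))

audio_chunks = []
for window in neural_stream(n_windows=25):   # ~2 seconds of attempted speech
    chunk = decode_window(window)            # decoded within the window hop,
    audio_chunks.append(chunk)               # so playback can begin immediately

audio = np.concatenate(audio_chunks)
print(f"streamed {len(audio_chunks)} chunks -> {len(audio) / AUDIO_RATE_HZ:.2f} s of audio")
```

The key design point the sketch captures is that latency is bounded by the window length rather than the sentence length: each small chunk of audio becomes available as soon as its window is decoded, which is what makes sub-second, conversational output possible.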
This advance could greatly improve quality of life for people with conditions such as ALS or severe paralysis, giving them a more natural and intuitive way to interact. Although the technology is still under development, it has the potential to transform communication for people who cannot speak.