A $4 million award from the Army Research Office has allowed UC Irvine, Carnegie Mellon University and University of Maryland scientists to begin research on imagined speech that could lead to technology that converts brain waves into text.
The research, which is being pursued with the intent of aiding silent communication among soldiers, may also have an application in the commercial sector, with a device that could help the mute communicate.
“We’d like to evolve the ‘push to talk’ button on walkie-talkies into ‘push to think,’ ” said Professor Mike D’Zmura, the project’s principal investigator.
Although the scientists on the project have settled on neuroscientific and signal-processing approaches to the research, the development of a device that is portable and does not require lengthy setup or a laboratory environment to function is likely to be 10 to 15 years away. However, according to D’Zmura, in three or so years it should be possible “to develop software that would let someone communicate using a small working vocabulary — a limited number of words — using [electroencephalographic] brain waves recorded during imagined speech.”
The researchers use a high-density electroencephalography (EEG) net made up of 128 saltwater-soaked sponges. The net’s futuristic appearance has led to mischaracterization of the project by some media.
Although some have called it a mind-reading helmet, D’Zmura wants to clarify that it is not.
“It is not a helmet and there’s no way it can operate without the complete cooperation of the person wearing it,” D’Zmura said. “A person who wants to communicate this way would also have to train the software to recognize the individual characteristics of his or her own brain waves.”
In recent experiments, the participant wearing the EEG net reads a sentence projected on a computer monitor or hears the sentence over a loudspeaker and then repeats the sentence to himself or herself. The electrical waves produced by the brain during that process are amplified and recorded. The difficulty lies in interpreting the EEG brain waves.
To D’Zmura and his colleagues, the key is the phonemes and their articulation. D’Zmura explained that the average college student knows between 10,000 and 80,000 words made up of around 9,000 syllables — too many words and syllables to serve as the basic elements in imagined speech recognition. However, D’Zmura noted that the problem becomes far more tractable at the level of phonemes.
“There are 40 phonemes that make up these syllables, and the approach is to distinguish a relatively small number of elements in brain waves,” D’Zmura said.
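The idea of classifying a small set of phonemes, rather than thousands of words or syllables, can be sketched in miniature. The toy Python example below is purely illustrative and not the project's actual method: it invents per-phoneme "template" feature vectors (standing in for patterns learned from a user's training session, as D'Zmura describes) and labels a noisy observation by nearest template. All names, feature counts, and noise levels are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny subset of the ~40 English phonemes mentioned in the article.
PHONEMES = ["/a/", "/i/", "/u/", "/s/", "/t/"]
N_FEATURES = 16  # hypothetical: e.g. band-power features from a few EEG channels

# Invented per-phoneme templates, as if learned during a user's training phase.
templates = {p: rng.normal(size=N_FEATURES) for p in PHONEMES}

def classify(features: np.ndarray) -> str:
    """Return the phoneme whose template is closest (Euclidean) to the features."""
    return min(templates, key=lambda p: float(np.linalg.norm(features - templates[p])))

# Simulate an imagined "/s/" observed with a little measurement noise.
observation = templates["/s/"] + 0.1 * rng.normal(size=N_FEATURES)
print(classify(observation))
```

Because only a handful of classes must be told apart, even this crude nearest-template rule succeeds at low noise — which is the intuition behind preferring 40 phonemes over 9,000 syllables as the decoding alphabet. Real imagined-speech decoding would, of course, involve per-user calibration and far more sophisticated signal processing.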
D’Zmura believes brain waves will eventually be decoded into text in real time. He said he is not worried about processing power, given the capabilities of modern computers.
“We will have at least 10 to 15 times the computing power we have today in 10 to 15 years,” D’Zmura said.
Instead, D’Zmura said researchers are more concerned with providing scientific grounding for brain-wave analysis. This is why a multidisciplinary team of scientists is working on the project, including university faculty with backgrounds in psychology, linguistics, cognitive neuroscience, biomedical engineering, electrical engineering and computer science.
While the main reason for the research is military communication, D’Zmura is equally excited about the civilian uses for the project. As a science fiction fan, he does not see many limits on potential applications.
“This technology could be used to issue complex commands in video games using brain waves. It may also show up embedded into baseball caps and allow for silent conversations across a room,” D’Zmura said.
D’Zmura added that it would probably end up being used for silent communication by students in lecture halls, eliminating even the need for texting.
The project, officially funded by the U.S. Department of Defense’s Multi-Disciplinary University Research Institute program, can be found online at cnslab.ss.uci.edu/muri/index.html.