NASA develops 'mind-reading' system
16:50 18 March 04
NewScientist.com news service
A computer program that can read words before they are spoken, by analysing nerve signals in the mouth and throat, has been developed by NASA.
Preliminary results show the button-sized sensors, which attach under the chin and on either side of the Adam's apple and pick up nerve signals from the tongue, throat, and vocal cords, can indeed be used to read minds.
"Biological signals arise when reading or speaking to oneself with or without actual lip or facial movement," says Chuck Jorgensen, a neuroengineer at NASA's Ames Research Center in Moffett Field, California, in charge of the research.
The sensors have already been used to do simple web searches and may one day help space-walking astronauts and people who cannot talk communicate. The sensors could send commands to rovers on other planets, help injured astronauts control machines, or aid the handicapped.
In everyday life, they could even be used to communicate on the sly - people could use them on crowded buses without being overheard, say the NASA scientists.
Web search
For the first test of the sensors, scientists trained the software program to recognise six words - including "go", "left" and "right" - and 10 numbers. Participants hooked up to the sensors thought the words to themselves and the software correctly picked up the signals 92 per cent of the time.
Then researchers put the letters of the alphabet into a matrix with each column and row labelled with a single-digit number. In that way, each letter was represented by a unique pair of number co-ordinates. These were used to silently spell "NASA" into a web search engine using the mind-reading program.
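The coordinate scheme can be sketched in a few lines of code. The article does not give the exact grid NASA used, so the 5-by-6 layout, the row-major ordering, and the function names below are all assumptions for illustration:

```python
# Hypothetical sketch of the coordinate scheme described above: letters laid
# out in a grid whose rows and columns are labelled with single-digit numbers,
# so each letter maps to a unique (row, column) pair. The exact grid NASA used
# is not reported; a 5x6 row-major layout is assumed here.
import string

ROWS, COLS = 5, 6  # 5 * 6 = 30 cells, enough for 26 letters

# Build letter -> (row, col) and the reverse lookup.
letter_to_coords = {}
coords_to_letter = {}
for i, letter in enumerate(string.ascii_uppercase):
    r, c = divmod(i, COLS)  # single-digit row and column labels
    letter_to_coords[letter] = (r, c)
    coords_to_letter[(r, c)] = letter

def spell(word):
    """Return the sequence of coordinate pairs that spells a word."""
    return [letter_to_coords[ch] for ch in word.upper()]

def decode(pairs):
    """Reassemble a word from a sequence of coordinate pairs."""
    return "".join(coords_to_letter[p] for p in pairs)

print(spell("NASA"))                 # four (row, col) pairs
print(decode(spell("NASA")))         # "NASA"
```

Because rows and columns carry single-digit labels, a user only ever needs to "think" digits from the small trained vocabulary, yet can still spell arbitrary words.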
"This proved we could browse the web without touching a keyboard," says Jorgensen.
Noisy settings
Phil Green, a computer scientist focusing on speech and hearing at the University of Sheffield, UK, called the research "interesting and novel" on hearing the news. "If you're not actually speaking but just thinking about speaking then at least some of the messages still get sent from the brain to the vocal tract," he says.
But he cautions that the preliminary tests may have been successful because of the short length of the words, and suggests the test be repeated on many different people to confirm the sensors work for everyone.
The initial success "doesn't mean it will scale up", he told New Scientist. "Small-vocabulary, isolated word recognition is a quite different problem than conversational speech, not just in scale but in kind."
He says conventional voice-recognition technology is more powerful than the apparent results of these sensors, and that "the obvious thing is to couple this with acoustics" to enhance communication in noisy settings.
The NASA team is now working on sensors that will detect signals through clothing.
http://www.newscientist.com/news/news.jsp?id=ns99994795
Informant: Ken DeBusk