Ben Coxworth writing for Gizmag (thanks to Geoff H for the tip):
Last September, scientists from the University of California, Berkeley announced that they had developed a method of visually reconstructing images from people’s minds by analyzing their brain activity.
Much to the dismay of tinfoil hat-wearers everywhere, researchers from that same institution have now developed a somewhat similar system that is able to reconstruct words that people have heard spoken to them. Instead of being used to violate our civil rights, however, the technology could allow the vocally disabled to “speak.”
The study enlisted epilepsy patients who were already having arrays of electrodes placed on the surface of their brains to identify the source of their seizures. The scientists used these electrodes to monitor the electrical activity in a region of the brain’s auditory system known as the superior temporal gyrus (STG). From there, it was a matter of observing the specific activity patterns that occurred when the subjects heard certain words.
When the electrodes’ data was fed into a computational model, the computer was able to actually reproduce the sounds that had been heard – sort of. Although the noises made by the computer were somewhat garbled, they were close enough to the original words that the scientists could identify those words more reliably than would otherwise be possible…
[continues at Gizmag]