“Mind-boggling! Science creates computer that can decode your thoughts and put them into words,” the Daily Mail’s headline exclaimed today, while The Daily Telegraph heralded an era in which a “mind-reading device could become a reality”.
You’d be forgiven for thinking famous mind readers such as Derren Brown had just produced a telepathy implant. Instead, these reports are from a small study of 15 people that culminated in researchers being able to reconstruct the sound patterns of words using brain activity alone.
This research involved attaching electrical sensors directly to the brains of people undergoing brain surgery, to understand how they processed individual words that were played to them. The researchers demonstrated that the brain breaks words down into complex patterns of electrical activity. They were then able to create a mathematical algorithm that decoded and translated this brain activity back into a rough version of the original sound.
But the reconstructed words were not of good enough quality to be recognised by a human listener when played. The words were only recognised when the original and reconstructed sound patterns were compared visually.
This exciting new research raises the prospect of brain activity one day being translated into words using an implant. Such technology could help the many people with conditions that affect speech. But it is important to recognise that this research is at a very early stage, and a clinically effective implant is likely to be a long way off.
Where did the story come from?
The study was carried out by a collaboration of North American universities led by researchers from the University of California, Berkeley. It was funded by several academic grants and was published in the peer-reviewed science journal Public Library of Science (PLoS) Biology.
The researchers report that the human brain has evolved complex mechanisms to decode highly variable sounds into meaningful elements of language, such as words. Understanding this complex decoding in humans has proved difficult, as it requires recording brain activity on the exposed brain (with the skull removed).
This study took advantage of cases of rare brain surgery for epilepsy and brain tumours that allowed researchers to measure brain activity by attaching sensors directly to the brain surface. This provided a unique opportunity to understand how the human brain recognises speech.
This study received wide media coverage due to its futuristic appeal and was often given a sci-fi angle, with some suggesting a “mind-reading device could become reality”. This research does raise the possibility of developing a device that could interpret thoughts into speech in the future. However, it is important to note the authors’ own caution – that the technology of translating thoughts into words needs to be vastly improved before such a device could become a reality.
What kind of research was this?
This was a small study of 15 people undergoing brain surgery for epilepsy or a brain tumour. It looked at whether the complex brain activity involved in processing spoken words, such as the sound wave form and syllable rate, could be reconstructed using a computer program.
The researchers believe that the brain processes internal thoughts in a similar way to hearing sounds, and hope that this type of technology could eventually be used to help those who cannot talk, such as those in a coma or in the much-feared “locked-in syndrome”.
What did the research involve?
Fifteen patients undergoing brain surgery for epilepsy or a brain tumour were asked to listen to 47 real or invented words and sentences, spoken by different English speakers. All patients had normal language abilities when they were enrolled in the study.
During this process, electrical signals from the brain were recorded using multiple sensors attached directly to a part of the brain called the lateral temporal cortex. This includes the superior temporal gyrus (STG), which is thought to be very important in the processing of speech.
To understand and mimic the brain activity involved in processing heard words, the researchers used an approach referred to as “stimulus reconstruction”. In this case, the stimulus was hearing a spoken word.
Hearing words causes a large amount of brain activity as the brain recognises and processes different aspects of the sounds, for example their frequencies and the timing of syllables. The word reconstruction involved creating a mathematical algorithm, similar to those used in computer software, capable of decoding this vast amount of brain activity in such a way that the original words heard by the participant could be identified.
The reconstructed signals from different mathematical models (linear and non-linear) were compared to those detected directly from the brain surface to see how good they were at mimicking the brain’s activity when hearing spoken words. The researchers also used the models to identify the most important areas of the brain involved in processing this information and what other factors influenced the accuracy of the sound reconstructions.
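To make the idea of a linear "stimulus reconstruction" model concrete, here is a toy sketch in Python. This is not the study's actual code or data: the "stimulus", the simulated electrode responses, and all the dimensions are invented for illustration. It shows the general approach of fitting a linear (ridge-regression) decoder that maps recorded neural responses back to the sound pattern that evoked them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 500 time points, 16 spectrogram frequency bands,
# 40 electrodes (all hypothetical, chosen only for illustration).
T, F, E = 500, 16, 40

# A stand-in "stimulus": a random spectrogram representing word sounds.
stimulus = rng.standard_normal((T, F))

# Simulated neural responses: each electrode records a noisy linear
# mixture of the frequency bands -- a stand-in for STG activity.
mixing = rng.standard_normal((F, E))
responses = stimulus @ mixing + 0.5 * rng.standard_normal((T, E))

# Stimulus reconstruction: fit a linear ridge decoder mapping the
# neural responses back to the spectrogram, then apply it.
lam = 1.0
decoder = np.linalg.solve(
    responses.T @ responses + lam * np.eye(E),
    responses.T @ stimulus,
)
reconstruction = responses @ decoder

# Reconstruction quality: correlation between original and decoded patterns.
r = np.corrcoef(stimulus.ravel(), reconstruction.ravel())[0, 1]
print(round(r, 2))
```

In this idealised toy setting the correlation comes out high; in real recordings the decoded sound pattern is far noisier, which is why the study's reconstructions could not be recognised by ear.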
What were the basic results?
When constructing the mathematical models, the researchers found that the STG region of the brain was important in creating an accurate prediction of the sound pattern of the original word.
The sound patterns generated by the mathematical model allowed specific words to be identified directly from the brain activity of patients listening to them. These took the form of visual representations of the words' sound patterns. The 47 words were presented in pairs and, on average, the model correctly identified the word in approximately nine out of every ten instances (89%). This was significantly better than the 50% correct identification that would be expected by guessing alone.
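The pairwise identification test described above can be illustrated with a toy sketch (invented data, not the study's recordings or its 89% figure): each "word" is a random spectrogram template, each "reconstruction" is a heavily degraded copy of one word, and a pair is scored correct when the reconstruction correlates more strongly with its own word than with the other candidate.

```python
import numpy as np

rng = np.random.default_rng(1)

# 47 hypothetical "words", each a small spectrogram template.
n_words, shape = 47, (20, 16)
words = rng.standard_normal((n_words,) + shape)

# Stand-in reconstructions: the true pattern plus heavy noise,
# mimicking decoder output too degraded to recognise by ear.
recons = words + 1.5 * rng.standard_normal(words.shape)

def pick(recon, a, b):
    """Pairwise identification: choose whichever candidate
    spectrogram correlates better with the reconstruction."""
    ra = np.corrcoef(recon.ravel(), a.ravel())[0, 1]
    rb = np.corrcoef(recon.ravel(), b.ravel())[0, 1]
    return 0 if ra >= rb else 1

# Score every pair: was the reconstruction matched to its own word?
trials = correct = 0
for i in range(n_words):
    for j in range(n_words):
        if i == j:
            continue
        trials += 1
        correct += pick(recons[i], words[i], words[j]) == 0

accuracy = correct / trials
print(round(accuracy, 2))
```

Even very noisy reconstructions can beat the 50% chance level on this kind of two-alternative test, which is why a model can score well here while its output remains unintelligible when played as sound.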
Importantly, however, the quality produced from reconstructing the words was not good enough for them to be recognised by a human listener when played. The words were only recognised when the original and reconstructed sound patterns were compared visually.
The researchers found that different types of mathematical models performed better at reconstructing the sounds of words with particular characteristics.
How did the researchers interpret the results?
The authors concluded that their results demonstrated that key aspects of speech signals can be reconstructed from STG activity.
Conclusion
This study of 15 people undergoing brain surgery has demonstrated a method of reconstructing the sound of a heard word using only signals recorded from the brain. It represents important progress in the field of speech reconstruction, which has the potential to improve the lives of many people with speech difficulties in the future.
But the words, when reconstructed, were not of good enough quality to be recognised by a human listener when played. The words could only be identified when the original and reconstructed sound patterns were compared visually. The researchers suggest that improving the brain sensors detecting the STG brain activity may, in the future, improve the reconstructed sound to a level that could be understood by a person listening.
The mathematical model used to reconstruct the words is at a very early stage and would need significant improvement and development before it could be considered for use in an implant or similar device. Similarly, future speech reconstruction research would need to demonstrate that it was effective across a wide range of words, sentence patterns and languages. So far, the model has only been tested on a limited vocabulary of 47 English words.
This research represents an intriguing first demonstration of the potential of speech reconstruction technology to transform the lives of people with communication problems in the future.