
World first: Listen to the brain’s thoughts translated into speech

Columbia University researchers have for the first time translated thoughts into recognisable words


The search for a machine that can read your thoughts has taken a significant step forward: US scientists have for the first time translated thoughts (brain waves) into intelligible, recognisable speech.

The new technology has great potential to give a voice to people who are paralysed or severely incapacitated by conditions such as locked-in syndrome – and the researchers hope eventually to develop a brain implant that would make such a miracle possible.

The research also opens the way for computers and the human brain to communicate directly with one another – a prospect whose tremendous potential for good and evil is all too easily imagined.

Epilepsy patients as test subjects

The Columbia University neuro-engineers used a combination of deep learning and a voice-synthesis vocoder to translate the brain activity prompted by words that test subjects had heard – rather than spoken – into sound.

They did this because their test subjects were five epilepsy patients already undergoing monitoring to determine in which part of the brain their seizures originated – monitoring achieved by placing electrodes on the temporal lobe of the brain in a procedure called invasive electrocorticography.

The procedure provides brain signals with an exceptionally high signal-to-noise ratio. The temporal lobe also happens to be where auditory processing occurs – and so the clinical testing, which often lasts a week or so, gives researchers an opportunity to conduct experiments that would probably be unethical in healthy volunteers.

Counting on a breakthrough

In the Columbia experiments, the words spoken into the ears of the test subjects were actually 10 numbers: zero to nine. Each number produced a distinct response in the brain.

The sound produced by the vocoder in response to those brain signals was analysed and cleaned up by neural networks, a type of artificial intelligence that mimics the structure of neurons in the biological brain.
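For readers curious about the mechanics, the sketch below is a toy illustration – not the researchers' code – of the underlying idea: a model is trained to map features extracted from electrode recordings onto speech-synthesis parameters, which a vocoder would then turn into audio. All of the array shapes, sizes and variable names are hypothetical, and a simple linear model stands in for the deep networks used in the study.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 200 time windows of activity from 64 electrodes, each
# paired with 32 speech-synthesis parameters (e.g. energies in frequency bands).
X = rng.standard_normal((200, 64))                       # neural features
true_W = 0.1 * rng.standard_normal((64, 32))
Y = X @ true_W + 0.01 * rng.standard_normal((200, 32))   # target parameters

# A single linear mapping trained by gradient descent on mean-squared error --
# a stand-in for the deep neural networks used in the actual study.
W = np.zeros((64, 32))
for step in range(500):
    pred = X @ W
    grad = X.T @ (pred - Y) / len(X)                      # gradient of the loss
    W -= 0.1 * grad

print("training error:", float(np.mean((X @ W - Y) ** 2)))
# In the real system, the predicted parameters would drive a vocoder that
# synthesises audible speech.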

The robotic translations were successfully understood by 75 per cent of people enlisted to test the technology.

Dr Nima Mesgarani, the paper’s senior author and a principal investigator at Columbia University’s Mortimer B. Zuckerman Mind Brain Behavior Institute, said the researchers plan to move on to testing words and complete sentences – and to run the same tests on brain signals emitted when a person speaks or imagines speaking.

This is where the system could evolve into an implant, “similar to those worn by some epilepsy patients, that translates the wearer’s thoughts directly into words,” Dr Mesgarani said.

“In this scenario, if the wearer thinks ‘I need a glass of water’, our system could take the brain signals generated by that thought, and turn them into synthesised, verbal speech.


“This would be a game changer. It would give anyone who has lost their ability to speak, whether through injury or disease, the renewed chance to connect to the world around them.”

These findings were published this week in Scientific Reports.

Despite the excitement in global news coverage of the research, there is a way to go before a practical device comes to market.

To access the thoughts that go with speaking, the researchers will have to probe a different part of the brain known as Broca's area – found in the frontal lobe and linked to speech production and language processing.

Deep learning technology the key

Dr Peter Stratton is a research fellow at the University of Queensland Brain Institute. His research interests include AI, deep learning and deep brain stimulation for the control of motor symptoms in conditions such as Parkinson’s disease.

He agreed the Columbia translations of thought into sound were a big improvement on previous research, which tended to rely on simple computer models that analysed spectrograms – visual representations of sound frequencies – and failed to produce anything resembling intelligible speech.
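For context, the short sketch below shows what a spectrogram is in code: a short-time Fourier transform that records how much energy a sound carries in each frequency band over time. The synthetic test tone is only an illustration standing in for recorded or reconstructed speech, and the sample rate and window length are arbitrary choices.

import numpy as np
from scipy.signal import spectrogram

fs = 8000                                   # sample rate in hertz
t = np.arange(0, 1.0, 1 / fs)
# A synthetic one-second tone mixing 440 Hz and 880 Hz components.
audio = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

# Energy per frequency band in short, overlapping time windows.
freqs, times, power = spectrogram(audio, fs=fs, nperseg=256)
print(power.shape)   # (frequency bins, time frames) -- the "picture" earlier models analysed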

Even so, Dr Stratton said he struggled at times to make out some of the new translations.

However, he said the experiments were “an interesting new application of deep learning. When you read the results, they’re not particularly surprising. Deep learning has done better on many tasks and this is another one where it outperforms existing methods of machine learning”.

He agreed, too, that the potential of the technology “is life changing”.


Professor Neil Levy is a philosopher of bioethics who splits his time between Macquarie University and the Uehiro Centre for Practical Ethics at the University of Oxford.

In an email, he told The New Daily there have been “worries… that technology like this could be used to spy on us.”

At the moment, we need a lot of cooperation from participants to decode anything at all, he said, but “given the likely neural commonalities across people, the day may come when you could use this kind of device to decode covert thought.”

To use the technology for interrogation, Dr Levy said, “you would need something less invasive, like EEG (electrodes on the scalp) and big advances in the technology’s power to decode. It could happen. We can already decode visual representation.”
