Japan Uses AI to Reproduce Voice from Brain Waves

Latest Update August 3, 2023

The Tokyo Institute of Technology is using AI to reproduce voice from brain waves, aiming to support patients with conditions such as hearing loss and intractable diseases such as amyotrophic lateral sclerosis (ALS).

6 May 2023 - The brain-computer interface (BCI), which connects the brain to a computer, is a hot topic of research. In particular, there are high expectations for non-invasive types that do not damage the body. Natsue Yoshimura, a professor at the Tokyo Institute of Technology's School of Information Science and Technology, uses artificial intelligence (AI) to decipher brain-activity signals obtained from electroencephalography (EEG) and magnetic resonance imaging (MRI). She is trying to elucidate the mechanisms behind functional declines such as hearing loss and amyotrophic lateral sclerosis (ALS).

The BCI, which reads human thoughts from outside the body and converts them into computer instructions, and the "brain-machine interface" (BMI), which lets machines operate as a person intends, were a dream until not long ago. In recent years, however, technology has advanced for reading brain-activity signals related to emotion, speech, and movement, and processing them with a computer.

In particular, the regions of the brain involved in controlling the hand have been identified. Development is therefore progressing on power-assist robots that support movement by picking up muscle signals at the wrist. Implanting electrodes in the human brain to operate devices is also becoming a reality, thanks to a start-up founded by the American entrepreneur Elon Musk.

Professor Yoshimura of Tokyo Institute of Technology and her colleagues are developing a non-invasive technology that reproduces speech a person imagines. Electrodes attached to the scalp capture brain-wave signals while subjects listen to sounds such as "A" and "T" or imagine them. When AI estimated the sound-source parameters and the restored sound was played back, it could be correctly recognized about 80% of the time.
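The article does not describe the team's actual model, but the idea of estimating sound-source parameters from brain-wave features can be illustrated with a minimal, entirely hypothetical sketch: synthetic "EEG feature" vectors are mapped to a single sound parameter (here a stand-in for pitch) by ordinary least-squares regression. All names and numbers below are illustrative assumptions, not the researchers' method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for EEG data: one feature vector per trial.
n_trials, n_features = 120, 32
W_true = rng.normal(size=n_features)                        # unknown brain-to-sound mapping
X = rng.normal(size=(n_trials, n_features))                 # "EEG features" per trial
pitch = X @ W_true + rng.normal(scale=0.5, size=n_trials)   # noisy sound-source parameter

# Fit a linear decoder on the first 100 trials, then predict the held-out trials.
W_hat, *_ = np.linalg.lstsq(X[:100], pitch[:100], rcond=None)
pred = X[100:] @ W_hat

# How well does the decoded parameter track the true one on unseen trials?
corr = np.corrcoef(pred, pitch[100:])[0, 1]
print(f"correlation between decoded and true parameter: {corr:.2f}")
```

Real systems estimate many parameters per time frame (e.g. a full spectral envelope) and then resynthesize audio from them; the single-parameter linear decoder above only conveys the decoding idea.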

Professor Yoshimura emphasizes, "I want to clarify which part of the brain processes the problem," so that the work can lead to the treatment of diseases.

The research focused on "vestibular electrical stimulation," which is also used in hospital tests. A small electrode is attached behind the ear and a weak electric current is passed through it, producing an effect similar to a disturbance of the vestibular sense of balance in the brain. The body then tilts regardless of the person's intention.

To this, the team applied classical conditioning, known from "Pavlov's dog," which associates conscious thoughts with unconditioned bodily reactions.

Specifically, subjects were trained with electrical stimulation so that thinking "yes" tilted the head toward the right ear and thinking "no" tilted it toward the left ear. When the stimulation was then removed, the tilt corresponding to the person's "yes" or "no" still occurred.

The accuracy of discriminating "yes" and "no" from brain waves measured without electrical stimulation turned out to be about 80%. Functional MRI (fMRI) also confirmed that the sensory areas of the brain were responding, showing that the sensation of the electrical stimulation had been firmly imprinted in the body.
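Discriminating "yes" from "no" in measured brain waves is, at its core, a two-class classification problem. The sketch below is a hypothetical illustration (not the team's classifier): synthetic feature vectors for the two classes differ slightly in mean, and a simple nearest-centroid rule labels held-out trials by whichever class mean is closer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for EEG trials: 0 = "no", 1 = "yes".
n_trials, n_features = 200, 16
labels = rng.integers(0, 2, n_trials)
X = rng.normal(size=(n_trials, n_features))
X[labels == 1] += 0.8                      # small class separation, mimicking the tilt response

# Train on the first 150 trials, test on the remaining 50.
train, test = np.arange(150), np.arange(150, n_trials)
c0 = X[train][labels[train] == 0].mean(axis=0)   # "no" centroid
c1 = X[train][labels[train] == 1].mean(axis=0)   # "yes" centroid

# Nearest-centroid rule: assign each test trial to the closer class mean.
pred = (np.linalg.norm(X[test] - c1, axis=1)
        < np.linalg.norm(X[test] - c0, axis=1)).astype(int)
accuracy = (pred == labels[test]).mean()
print(f"accuracy: {accuracy:.2f}")
```

Real EEG decoding pipelines add filtering, artifact rejection, and stronger classifiers, but the principle of separating two thought-related brain states by their measured features is the same.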

ALS is an intractable disease in which muscles become immobile due to abnormalities in nerve cells (motor neurons) in the brain. The goal is to enable patients to express "yes" or "no" to family members even as their symptoms progress.

 

#Medical #AI #Technology #Mreport #IndustryNews


Source: Nikkan Kogyo Shimbun