Researchers from Korea University and TU Berlin have created a brain-reading interface that can be used to control an exoskeleton.
They fitted a test subject with an electroencephalogram (EEG) cap, which records brain activity, and had him look at different flickering LEDs. Each flicker induces a distinct response in the brain, and the EEG system translates this neural activity into frequencies that the exoskeleton can respond to.
Simply put, the LED flickers don't necessarily mean that the brain itself wants the exoskeleton to move left or right. Rather, each flicker acts like an arbitrary variable that has been assigned a value. The EEG picks up the distinct brain activity produced when the subject looks at a given flicker, and the exoskeleton's computer module responds with the command mapped to that flicker.
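In setups like this, each LED flickers at its own frequency, and the decoder's job reduces to spotting which of those frequencies dominates the EEG spectrum, then issuing the command assigned to it. Here is a minimal sketch of that idea; the specific frequencies, command names, and simulated signal are illustrative assumptions, not details from the study:

```python
import numpy as np

# Hypothetical mapping from flicker frequency (Hz) to exoskeleton command.
# These values are made up for illustration.
COMMANDS = {9.0: "left", 11.0: "right", 13.0: "forward",
            15.0: "backward", 17.0: "stop"}

def classify_flicker(signal, sample_rate):
    """Return the command whose flicker frequency carries the most spectral power."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    # For each candidate frequency, look up the power in its nearest FFT bin.
    best = max(COMMANDS, key=lambda f: spectrum[np.argmin(np.abs(freqs - f))])
    return COMMANDS[best]

# Simulate one second of "EEG": an 11 Hz response buried in noise.
fs = 256
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 11.0 * t) + 0.3 * rng.standard_normal(fs)
print(classify_flicker(eeg, fs))  # → right
```

A real decoder would, of course, face far messier signals (the electrical noise discussed below among them), but the core trick of mapping arbitrary flicker frequencies to commands is the same.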
That’s not to say the research isn’t phenomenal. What the team managed to create is genuinely extraordinary: isolating those distinctive brain signals from other brain activity, as well as from the electrical noise generated by the exoskeleton itself, is an impressive feat.
“Exoskeletons create lots of electrical noise,” says Klaus Muller, a researcher on the project. “The EEG signal gets buried under all this noise—but our system is able to separate not only the EEG signal, but the frequency of the flickering LED within this signal.”
People with illnesses such as amyotrophic lateral sclerosis (ALS) could regain some independence through this technology, because it lets them ‘communicate’ with their assistive devices without verbal or physical gestures.
According to the team, the test subject learned to use the system within just five minutes, which makes it a promising solution since it requires no prolonged training to master—though the researchers’ system currently interprets only five LED flickers.