First, however, this article lays the foundation for the upcoming research we intend to share with the scientific community in order to demonstrate the commercial potential of our technology.

Sofia, BG/Copenhagen, DK: Today, we are happy to announce that our scientific paper entitled “Development of a real-time motor-imagery-based EEG brain-machine interface” has been accepted at the 25th International Conference on Neural Information Processing. The research was performed in collaboration with the University of Southern Denmark and utilized hardware and software from Berger Neurorobotics.

Research Motivation

Since the seventies, there has been increasing interest in the field of brain-computer interface technology and, accordingly, a growing number of research publications. However, most of the research has been conducted in isolated, controlled, and noise-free lab environments. This allows researchers to obtain better and more robust results, but it says little about performance in real-life everyday environments. Until recently this wasn’t a problem, since brain-computer interfaces were only considered feasible inside research environments, but recent technological advancements are starting to open possibilities for commercializing this technology.

The goal of our latest research was to prove that advanced brain-computer interfaces are feasible in everyday environments. To do that, we conducted experiments at one of the canteens at the University of Southern Denmark: a real, unpredictable, and noisy public space.

Experiments and Results

During the experiments, EEG data was collected at the canteen and was used to train a state-of-the-art machine learning classifier for motor imagery classification. The motor imagery problem is a traditional, but very challenging, brain-computer interface problem where a participant is asked to imagine left- or right-hand movement. Then a software system – usually one based on machine learning – has to detect the intention of the participant and control a virtual environment based on the participant’s thoughts.
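The paper’s exact pipeline is not detailed here, but a common approach to motor imagery classification combines Common Spatial Patterns (CSP) spatial filtering with a linear classifier. The following is a minimal illustrative sketch on synthetic data — the data, parameters, and filter count are all invented for illustration, not taken from the study:

```python
# Illustrative CSP + Fisher LDA sketch for left- vs right-hand motor
# imagery. Synthetic trials stand in for real EEG recordings: the two
# classes differ in variance on two channels, loosely mimicking
# lateralized mu-rhythm power changes during imagined movement.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 40, 8, 256

def make_trials(scale_a, scale_b):
    x = rng.standard_normal((n_trials, n_channels, n_samples))
    x[:, 0] *= scale_a  # channel 0 variance differs per class
    x[:, 1] *= scale_b  # channel 1 variance differs per class
    return x

left = make_trials(3.0, 1.0)    # "imagined left hand" trials
right = make_trials(1.0, 3.0)   # "imagined right hand" trials

def mean_cov(trials):
    return np.mean([t @ t.T / t.shape[1] for t in trials], axis=0)

c_l, c_r = mean_cov(left), mean_cov(right)

# CSP: whiten by the composite covariance, then diagonalize the
# left-class covariance in the whitened space.
d, V = np.linalg.eigh(c_l + c_r)
W = V @ np.diag(d ** -0.5) @ V.T        # symmetric whitening matrix
s, U = np.linalg.eigh(W @ c_l @ W)      # W is symmetric, so W.T == W
csp = U.T @ W                           # rows sorted by eigenvalue
filters = np.vstack([csp[:1], csp[-1:]])  # two most discriminative filters

def features(trials):
    # Log-variance of the spatially filtered signals.
    z = np.einsum("fc,tcs->tfs", filters, trials)
    return np.log(z.var(axis=2))

X = np.vstack([features(left), features(right)])
y = np.array([0] * n_trials + [1] * n_trials)

# Fisher LDA: project onto w, threshold at the midpoint of class means.
m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
Sw = np.cov(X[y == 0].T) + np.cov(X[y == 1].T)
w = np.linalg.solve(Sw, m1 - m0)
pred = ((X - (m0 + m1) / 2) @ w > 0).astype(int)
accuracy = (pred == y).mean()
```

On this deliberately separable synthetic data the classifier fits the training trials almost perfectly; real EEG is far noisier, which is exactly why testing outside the lab matters.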

After the data was collected and used to train the machine learning system, a second experiment was performed in which the participants had to control a virtual environment by thought amid the noisy surroundings of the canteen. The system was tested in real-time, and the results were compared to experiments performed with research-grade equipment with wet electrodes in an isolated lab environment.
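Real-time operation like the one described above typically means sliding a window over a streaming EEG buffer and classifying each window into a control command. The sketch below is purely hypothetical — the placeholder `classify` rule, window sizes, and fake stream are all assumptions standing in for a trained model and live hardware:

```python
# Hypothetical real-time loop: maintain a ring buffer of incoming EEG
# samples, classify a sliding window every `step` samples, and emit a
# left/right control command for the virtual environment.
from collections import deque

import numpy as np

rng = np.random.default_rng(1)
n_channels, window, step = 8, 256, 64

def classify(segment):
    # Placeholder decision rule (compare variance on two channels),
    # standing in for a trained motor-imagery classifier.
    return "left" if segment[0].var() > segment[1].var() else "right"

stream = rng.standard_normal((n_channels, 1024))  # fake EEG stream
stream[0, 512:] *= 3.0  # variance shift halfway through, as if the
                        # participant starts imagining left-hand movement

buffer = deque(maxlen=window)  # ring buffer of per-sample channel vectors
commands = []
for t in range(stream.shape[1]):
    buffer.append(stream[:, t])
    if len(buffer) == window and t % step == 0:
        segment = np.array(buffer).T  # shape: (channels, samples)
        commands.append(classify(segment))
```

In a real deployment the loop would be driven by the headset’s sampling callback rather than a pre-recorded array, and latency per window is what makes or breaks the real-time experience.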

The results showed that our wireless system with dry electrodes performed as well as a research-grade system with wet electrodes, and did so in an everyday noisy environment. These results are exciting: we achieved high performance and robustness, tested our hardware and software in a realistic everyday setting, and showed that high-performance brain-computer interface technology outside of research labs is already feasible.
