MI-BMInet: An Efficient Convolutional Neural Network for Motor Imagery Brain–Machine Interfaces with EEG Channel Selection

28 Mar 2022 · Xiaying Wang, Michael Hersche, Michele Magno, Luca Benini

A brain–machine interface (BMI) based on motor imagery (MI) enables the control of devices using brain signals while the subject imagines performing a movement. It plays a vital role in prosthesis control and motor rehabilitation. To improve user comfort, preserve data privacy, and reduce the system's latency, a new trend in wearable BMIs is to execute algorithms on low-power microcontroller units (MCUs) embedded in edge devices, processing the electroencephalographic (EEG) data in real time close to the sensors. However, most classification models in the literature are too resource-demanding to fit on low-power MCUs. This paper proposes an efficient convolutional neural network (CNN) for EEG-based MI classification that achieves comparable accuracy while being orders of magnitude less resource-demanding and significantly more energy-efficient than state-of-the-art (SoA) models, enabling long-lifetime battery operation. To further reduce the model complexity, we propose an automatic channel selection method based on spatial filters and quantize both weights and activations to 8-bit precision with negligible accuracy loss. Finally, we implement and evaluate the proposed models on leading-edge parallel ultra-low-power (PULP) MCUs. The final 2-class solution consumes as little as 30 µJ/inference with a runtime of 2.95 ms/inference and an accuracy of 82.51% while using 6.4× fewer EEG channels, becoming the new SoA for embedded MI-BMI and defining a new Pareto frontier in the three-way trade-off among accuracy, resource cost, and power usage.
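The two complexity-reduction ideas in the abstract, selecting EEG channels from the weights of spatial filters and quantizing values to 8 bits, can be sketched as follows. This is a minimal illustration only: it uses classic common spatial pattern (CSP) filters as the spatial filters, and the function names, the trace-normalized covariance estimate, and the symmetric per-tensor int8 scheme are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np
from scipy.linalg import eigh

def csp_spatial_filters(trials_a, trials_b, n_pairs=2):
    """CSP spatial filters for two MI classes (illustrative sketch).

    trials_a, trials_b: arrays of shape (n_trials, n_channels, n_samples).
    Returns a (n_channels, 2 * n_pairs) matrix of unit-norm spatial filters.
    """
    def mean_cov(trials):
        # Trace-normalized per-trial covariance, averaged over trials
        covs = [t @ t.T for t in trials]
        return np.mean([c / np.trace(c) for c in covs], axis=0)

    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenproblem: ca w = lambda * (ca + cb) w
    vals, vecs = eigh(ca, ca + cb)  # eigenvalues in ascending order
    # Filters at both ends of the eigenvalue spectrum discriminate
    # the two classes best
    idx = np.r_[np.arange(n_pairs), np.arange(len(vals) - n_pairs, len(vals))]
    filters = vecs[:, idx]
    return filters / np.linalg.norm(filters, axis=0)

def select_channels(filters, n_keep):
    # Score each channel by its aggregate absolute weight across the
    # spatial filters; keep the n_keep highest-scoring channels.
    scores = np.abs(filters).sum(axis=1)
    return np.sort(np.argsort(scores)[::-1][:n_keep])

def quantize_int8(x):
    # Symmetric per-tensor 8-bit quantization (scale choice is an
    # assumption; the paper quantizes both weights and activations).
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale
```

With filters learned this way, channels whose weights contribute little to any discriminative filter can be dropped, which is how a 6.4× reduction in channel count becomes possible without retraining the acquisition setup.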
