Machine Learning Using Brain Computer Interface System
Brain-computer interface (BCI) controllers are a trending topic in system design, enabling control of devices that enhance or restore a user's movement or functionality. With commercially available hardware and supporting software, electrical brain potentials are measured via a headset with a collection of electrodes. Of the different types of brain signals, the proposed BCI controller utilizes non-task-related signals, e.g. squeezing the left/right hand or tapping the left/right foot, due to their responsive behavior and the general similarity of their signal features among patients. In addition, motor imagery signals, such as imagining left/right foot or hand movement, are also examined. The main goal of this paper is to demonstrate the performance of machine learning algorithms based on classification accuracy. The performances are evaluated on a BCI dataset of three male subjects, from which the most significant features are extracted before being introduced to the machine learning algorithms.
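The paper itself contains no code; as an illustrative sketch only, a comparison of classifiers by classification accuracy on pre-extracted EEG feature vectors might look like the following scikit-learn snippet. The feature matrix, trial counts, and class shift here are synthetic stand-ins, not the paper's data.

```python
# Illustrative sketch: rank classifiers by cross-validated accuracy on
# synthetic stand-in "EEG features" (NOT the paper's actual dataset).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# 160 trials (80 per class, matching the paper's minimum), 8 features each,
# e.g. band-power values per electrode (hypothetical feature choice).
X = rng.normal(size=(160, 8))
y = np.repeat([0, 1], 80)            # 0 = left, 1 = right
X[y == 1] += 0.8                     # inject artificial class separation

for clf in (LinearDiscriminantAnalysis(), SVC(kernel="rbf")):
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{type(clf).__name__}: mean accuracy = {acc:.2f}")
```

Any classifier scoring near 0.5 on balanced two-class data is performing at chance level, which is the baseline against which BCI accuracies are judged.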
The current effort focuses on recording, processing, and analyzing the raw brain signals of three male test subjects using a modified commercial Emotiv headset with the OpenViBE Designer and Acquisition Server software (v1.2.2). Each subject undergoes a 30-minute session composed of four experiments: two with non-task-related signals and two with motor imagery signals. Each experiment records fifteen trials of two classes (e.g. left/right hand movement). The raw data are then pre-processed with EEGLAB, a MATLAB plugin, where standard cleaning and epoching of the signals are performed. Individual trials are visually inspected to reject any malfunctioning electrode data before being compiled into a single file containing at least 80 acceptable trials per class, for each experiment and for each subject. An independent component analysis (ICA) is conducted to remove expected artifacts such as blinking, and a time-frequency analysis is used to compare the power and signal features of each subject. Ultimately, these collected trials are used to train an efficient classifier that can issue an input command to a robotic device (e.g. a prosthetic arm or an electrically powered wheelchair). The goal is for a subject to initiate a command using brainwaves and for the robot to perform the action in real time. After optimizing the initial preprocessing, the test performance of the machine learning algorithms is achieved and presented in this research. The paper discusses machine learning for robotic applications and how BCI interfacing is accomplished using feature selection. We also describe common flaws in validating machine learning methods in the context of BCI to provide a brief overview of biologically controlled (brain-wave-driven) devices.
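The filtering, epoching, and feature-extraction steps described above are performed in EEGLAB/MATLAB in the paper; the following is only a minimal Python sketch of that style of offline pipeline, assuming a hypothetical 128 Hz single-channel recording and illustrative marker times and band edges.

```python
# Minimal sketch of an offline EEG pipeline: band-pass filter, epoch
# around event markers, extract band-power features. All signals,
# sampling rate, marker times, and band edges are illustrative.
import numpy as np
from scipy.signal import butter, filtfilt, welch

fs = 128                                       # Hz (hypothetical rate)
t = np.arange(0, 60, 1 / fs)                   # 60 s of "recording"
rng = np.random.default_rng(1)
raw = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)

# 1) Band-pass to the 8-30 Hz range commonly used for motor signals
b, a = butter(4, [8, 30], btype="band", fs=fs)
filtered = filtfilt(b, a, raw)

# 2) Epoch: 2-second windows starting at each (hypothetical) trial onset
markers = np.arange(1, 55, 4)                  # onset times in seconds
epochs = np.stack(
    [filtered[int(m * fs): int(m * fs) + 2 * fs] for m in markers]
)

# 3) Feature: mean spectral power in the 8-12 Hz (mu) band per epoch
f, psd = welch(epochs, fs=fs, axis=-1)
mu_power = psd[:, (f >= 8) & (f <= 12)].mean(axis=-1)
print(epochs.shape, mu_power.shape)
```

Artifact removal via ICA (as done in EEGLAB for blink components) is omitted here for brevity; in a fuller pipeline it would sit between filtering and epoching.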
Category
Technical Paper Publication
Description
Session: 05-12-01 Sensors and Actuators, Machine Learning, & Robotics, Rehabilitation
ASME Paper Number: IMECE2020-23394
Session Start Time: November 19, 2020, 03:40 PM
Presenting Author: Kevin Matsuno
Presenting Author Bio: Mr. Kevin Matsuno is a Mechanical Engineering graduate research student at California State University Northridge. He is currently working on his MS degree with an emphasis on controls with bioengineering applications. He also works full-time in the aerospace industry as a design engineer for Aerojet Rocketdyne.
Authors: Kevin Matsuno, California State University Northridge
Vidya Nandikolla, California State University Northridge