Commanding a robot through action grammar using non-invasive brain computer interface
The advent of brain-computer interface (BCI) technology is enabling methodologies for interfacing robots with the human brain. This work explores the possibility of using a low-cost, non-invasive BCI based on electroencephalography (EEG) to control robots according to a person's intentions. Conventional methods use classifiers to identify intentions from the EEG collected from a person and then control each of the robot's actions discretely, which makes any complex task cumbersome: a large set of intentions must be identified, a highly accurate classifier is required, and commanding every individual action is tedious when a complex motion is desired.

In this work, the robot itself combines fundamental actions, called "actemes", to perform the complex task. The combination is governed by a set of rules called a "grammar", which specifies the allowable and disallowed combinations of motions. Automatic task construction also requires that the robot be aware of its workspace so that it can avoid collisions with objects. This involves knowing its own internal states (proprioception) as well as the objects and other features of the working environment (spatial cognition); the ability to construct the task is called task awareness, and the latter is called spatial awareness. Together these mimic the local intelligence of a human hand, allowing the robot to act as an extension of the user's arm.

The decision to perform an acteme according to the grammar is equivalent to finding a control policy at a state of the task in a Markov decision process (MDP); the action grammar is therefore modelled as a stationary MDP. Proprioception is carried out by estimating the robot's states (joint and workspace kinematics) using an unscented Kalman filter or a particle filter, and objects in the workspace are identified using computer vision techniques. Because an articulated robot can take amorphous shapes in the workspace, objects are represented relative to the robot in a self-map: a normalized map that expresses various metrics of the objects with respect to a fixed-shape representation of the robot. The self-map converts sensory information about objects in proximity into the reward function of a non-stationary MDP.

A task-aware and spatially aware robot is easy to control through the BCI: the user's effort is reduced to expressing the intention to initiate a task and the intention to proceed at any stage of it. A neural network classifier is trained on data collected from the subject while watching the robot perform the fundamental actions, using 36 Hjorth and 1440 autoregression parameters as features; the trained classifier then identifies intentions from the EEG. Two experiments were conducted: commanding the robot to perform a screw-insertion task, and a door-opening problem in which the robot must perform actions that lead to opening a door based on the user's intentions. The subject was able to command the robot and accomplish the tasks more easily than with conventional methods.
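As a minimal sketch of how an action grammar can be cast as a stationary MDP, the toy example below encodes allowable acteme sequences for a screw-insertion-like task and extracts a control policy by value iteration. The acteme names, transition structure, and reward values are illustrative assumptions, not the encoding used in the thesis.

```python
import numpy as np

# Sketch: the action grammar as a stationary MDP. Each task state maps to
# the actemes the grammar allows there and the state each acteme leads to.
GRAMMAR = {                        # state -> {allowable acteme: next state}
    0: {"reach": 1},
    1: {"align": 2},
    2: {"insert": 3, "align": 2},  # re-alignment is an allowable loop
    3: {"rotate": 4},
    4: {"retract": 5},
}
GOAL, GAMMA = 5, 0.95

def reward(next_state):
    # Step cost encourages short plans; bonus for completing the task.
    return 10.0 if next_state == GOAL else -1.0

def value_iteration(tol=1e-8):
    V = np.zeros(GOAL + 1)         # V[GOAL] stays 0 (terminal state)
    while True:
        V_new = V.copy()
        for s, actions in GRAMMAR.items():
            V_new[s] = max(reward(s2) + GAMMA * V[s2]
                           for s2 in actions.values())
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new

V = value_iteration()
# The control policy picks, at each task state, the allowable acteme
# that maximizes expected return.
policy = {s: max(acts, key=lambda a: reward(acts[a]) + GAMMA * V[acts[a]])
          for s, acts in GRAMMAR.items()}
print(policy)  # {0: 'reach', 1: 'align', 2: 'insert', 3: 'rotate', 4: 'retract'}
```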
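Proprioception by state estimation could look like the following single-joint unscented Kalman filter sketch, where the state is [angle, angular velocity], the process model is constant velocity, and only the angle is measured. The dimensions, noise covariances, and dynamics are assumptions for illustration, not the thesis's robot model.

```python
import numpy as np

# Minimal UKF for one joint: x = [angle, angular velocity].
n, dt = 2, 0.01
alpha, beta, kappa = 1.0, 2.0, 0.0
lam = alpha**2 * (n + kappa) - n

Wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))   # sigma-point weights
Wc = Wm.copy()
Wm[0] = lam / (n + lam)
Wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)

Q = np.diag([1e-6, 1e-4])          # process noise (assumed)
R = np.array([[1e-3]])             # encoder noise (assumed)

def f(x):                          # process model: constant angular velocity
    return np.array([x[0] + dt * x[1], x[1]])

def h(x):                          # measurement model: angle only
    return x[:1]

def sigma_points(x, P):
    S = np.linalg.cholesky((n + lam) * P)
    return np.vstack([x, x + S.T, x - S.T])      # shape (2n+1, n)

def ukf_step(x, P, z):
    # Predict: propagate sigma points through the process model.
    X = np.array([f(s) for s in sigma_points(x, P)])
    x_pred = Wm @ X
    P_pred = Q + sum(w * np.outer(s - x_pred, s - x_pred)
                     for w, s in zip(Wc, X))
    # Update: map predicted sigma points into measurement space.
    Z = np.array([h(s) for s in X])
    z_pred = Wm @ Z
    S = R + sum(w * np.outer(s - z_pred, s - z_pred) for w, s in zip(Wc, Z))
    C = sum(w * np.outer(xs - x_pred, zs - z_pred)
            for w, xs, zs in zip(Wc, X, Z))
    K = C @ np.linalg.inv(S)
    return x_pred + K @ (z - z_pred), P_pred - K @ S @ K.T

x, P = np.zeros(2), np.eye(2) * 0.1
for z in [0.01, 0.03, 0.06, 0.10]:               # simulated encoder readings
    x, P = ukf_step(x, P, np.array([z]))
print(x)  # estimated [angle, angular velocity]
```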
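The self-map's conversion of proximity into reward might be sketched as below for a planar arm: obstacle positions are expressed relative to sampled points on the links, clearances are normalized against a safety distance, and the result enters the MDP reward as a penalty. The link geometry, normalization, and penalty shape are all assumptions; the thesis's self-map metrics are not detailed in the abstract.

```python
import numpy as np

def link_points(joint_angles, link_lengths, samples_per_link=5):
    """Sample points along a planar articulated arm (forward kinematics)."""
    pts, base, theta = [], np.zeros(2), 0.0
    for q, L in zip(joint_angles, link_lengths):
        theta += q
        tip = base + L * np.array([np.cos(theta), np.sin(theta)])
        for t in np.linspace(0.0, 1.0, samples_per_link):
            pts.append(base + t * (tip - base))
        base = tip
    return np.array(pts)

def self_map_reward(joint_angles, link_lengths, obstacles, d_safe=0.2):
    """Penalty from normalized clearance of each obstacle to the arm."""
    pts = link_points(joint_angles, link_lengths)
    penalty = 0.0
    for obs in obstacles:
        d = np.min(np.linalg.norm(pts - obs, axis=1))   # closest clearance
        # Normalized proximity in [0, 1]: 1 at contact, 0 beyond d_safe.
        proximity = max(0.0, 1.0 - d / d_safe)
        penalty += proximity**2
    return -penalty   # contributes to the (non-stationary) MDP reward

obstacles = np.array([[0.8, 0.3], [0.2, -0.5]])
print(self_map_reward([0.3, -0.4], [0.5, 0.5], obstacles))
```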
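The Hjorth part of the feature vector is straightforward to reproduce: activity, mobility, and complexity per channel, which yields the 36 features mentioned if 12 EEG channels are used. The channel count and sampling rate here are assumptions; the 1440 autoregression parameters would be concatenated analogously and are not shown.

```python
import numpy as np

def hjorth(x):
    """Hjorth activity, mobility, and complexity of one EEG channel."""
    dx, ddx = np.diff(x), np.diff(np.diff(x))
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / activity)
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

rng = np.random.default_rng(0)
eeg = rng.standard_normal((12, 512))            # 12 channels, 1 s at 512 Hz
features = np.array([hjorth(ch) for ch in eeg]).ravel()
print(features.shape)                            # (36,)
```

Such a 36-dimensional Hjorth vector, together with the autoregression coefficients, would form the input to the neural network classifier described in the abstract.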