Using brain activity for ambiguity reduction of touch-based gestures in a 3D CAD environment
The aim of this thesis is to explore how multimodal Computer Aided Design (CAD) interfaces can be developed by integrating brain-computer interfaces (BCI) with touch-based systems. This integration is proposed to help reduce the ambiguity associated with 2-dimensional (2D) touch-based gestures in 3-dimensional (3D) applications. Because touch gestures are 2D in nature, their recognition is subject to ambiguity and they can admit multiple interpretations, particularly when used to operate a 3D application. This thesis therefore describes methods that use the brain signals of users to help resolve this ambiguity. The analysis was based on a CAD modeling interface in which users could perform various object-manipulation operations on a given 3D model. Experiments were conducted that involved translation- and rotation-based transformations of the model using touch gestures. As the subjects performed the experiments, their gestures were captured, and a neuroheadset was simultaneously used to collect brain signals from 9 locations on the scalp. These brain signals, known as electroencephalogram (EEG) signals, were then analyzed in the time-frequency domain to detect desynchronization in certain frequency bands (Theta: 3-7 Hz, Alpha: 8-13 Hz, Lower Beta: 14-20 Hz, Upper Beta: 21-29 Hz, and Gamma: 30-40 Hz) as an indication of motor imagery. The analysis of the EEG signals rests on the phenomena known as "Event Related Synchronization" (ERS) and "Event Related Desynchronization" (ERD), which occur when populations of neurons fire in a more or less synchronized manner with each other, depending on the mental state of the user. Features were extracted from the gesture data and the EEG data and used to classify the performed tasks separately for each modality. These features were then merged into a single feature vector, which was again used to classify the same tasks. The results obtained indicate that developing a multimodal interface of this nature is indeed feasible.
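
To make the band-power analysis concrete, the following is a minimal sketch of how an ERD/ERS index could be computed for a single EEG channel. The 128 Hz sampling rate, window lengths, and function names are illustrative assumptions and are not taken from the thesis; only the band limits follow the ranges listed above (negative values of the index indicate desynchronization).

# Minimal sketch of band-power ERD/ERS estimation for one EEG channel,
# assuming a 128 Hz sampling rate and a 1 s pre-gesture baseline window
# (both assumptions; the band limits follow the abstract above).
import numpy as np
from scipy.signal import butter, filtfilt

FS = 128  # assumed sampling rate in Hz

BANDS = {
    "theta": (3, 7),
    "alpha": (8, 13),
    "lower_beta": (14, 20),
    "upper_beta": (21, 29),
    "gamma": (30, 40),
}

def band_power(signal, low, high, fs=FS):
    """Mean power of the signal after band-pass filtering to [low, high] Hz."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, signal)
    return np.mean(filtered ** 2)

def erd_percent(baseline, activity, low, high):
    """Classic ERD/ERS index: (A - R) / R * 100; negative means desynchronization."""
    r = band_power(baseline, low, high)
    a = band_power(activity, low, high)
    return (a - r) / r * 100.0

# Example: compare a 1 s baseline segment with a 1 s segment recorded while
# the subject performs a touch gesture (synthetic data used here).
baseline = np.random.randn(FS)
activity = np.random.randn(FS)
for name, (lo, hi) in BANDS.items():
    print(f"{name}: {erd_percent(baseline, activity, lo, hi):+.1f} %")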
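The fusion step can be illustrated in the same spirit: per-trial gesture features and EEG features are concatenated into a single feature vector and classified. The feature dimensions, the linear support-vector classifier, and the synthetic data below are assumptions made for illustration and do not represent the classifier actually used in the thesis.

# Minimal sketch of early feature fusion followed by classification,
# assuming gesture and EEG features have already been extracted per trial.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

n_trials = 60
gesture_features = np.random.randn(n_trials, 8)   # hypothetical gesture descriptors
eeg_features = np.random.randn(n_trials, 45)      # e.g. 9 channels x 5 band powers
labels = np.random.randint(0, 2, n_trials)        # e.g. translation vs. rotation task

# Early fusion: concatenate the two modalities into a single feature vector.
fused = np.hstack([gesture_features, eeg_features])

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, fused, labels, cv=5)
print("cross-validated accuracy:", scores.mean())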