Annu Int Conf IEEE Eng Med Biol Soc
July 2024
Decoding inner speech from the brain via the hybridisation of fMRI and EEG data is explored to investigate the performance benefits over unimodal models. Two fusion approaches are examined: concatenation of the probability vectors produced by unimodal fMRI and EEG machine learning models, and data fusion with feature engineering. Same-task inner speech data are recorded from four participants, and different processing strategies are compared and contrasted with previously employed hybridisation efforts.
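The first of these approaches amounts to decision-level (late) fusion. Below is a minimal sketch of how such a scheme can be wired up, assuming scikit-learn-style unimodal classifiers; the trial counts, feature dimensions, and classifier choices are synthetic placeholders for illustration, not details from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-trial features from the two modalities;
# the trial count, class count, and dimensions are placeholders.
n_trials, n_classes = 200, 4
y = rng.integers(0, n_classes, n_trials)
X_fmri = rng.standard_normal((n_trials, 50)) + 0.3 * y[:, None]
X_eeg = rng.standard_normal((n_trials, 30)) + 0.2 * y[:, None]

train, test = np.arange(150), np.arange(150, 200)

# One unimodal classifier per modality.
clf_fmri = LogisticRegression(max_iter=1000).fit(X_fmri[train], y[train])
clf_eeg = LogisticRegression(max_iter=1000).fit(X_eeg[train], y[train])

def stacked_probs(idx):
    # Concatenate the class-probability vectors from both models.
    return np.hstack([clf_fmri.predict_proba(X_fmri[idx]),
                      clf_eeg.predict_proba(X_eeg[idx])])

# A meta-classifier learns to combine the two probability vectors.
meta = LogisticRegression(max_iter=1000).fit(stacked_probs(train), y[train])
print("late-fusion accuracy:", meta.score(stacked_probs(test), y[test]))
```

Note that this toy version fits the meta-classifier on the same trials used to train the base models, which is optimistic; in practice one would stack cross-validated, out-of-fold probabilities instead.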
Handwritten signatures in biometric authentication leverage unique individual characteristics for identification, offering high specificity through their dynamic and static properties. However, this modality faces significant challenges from sophisticated forgery attempts, underscoring the need for enhanced security measures in common applications. To address forgery in signature-based biometric systems, integrating a forgery-resistant modality, namely noninvasive electroencephalography (EEG), which captures unique brain activity patterns, can significantly enhance system robustness by exploiting the strengths of multimodality.
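One common way to realise such a combination is score-level fusion, in which each modality produces an independent match score and the accept/reject decision is taken on a weighted combination. The sketch below is a hypothetical illustration of that idea; the weight, threshold, and score values are invented for the example and are not taken from the paper.

```python
import numpy as np

# Hypothetical match scores in [0, 1] from two independent verifiers:
# one for the handwritten signature, one for the EEG response.
def fused_decision(sig_score: float, eeg_score: float,
                   w_sig: float = 0.6, threshold: float = 0.7) -> bool:
    # Weighted-sum score fusion; weight and threshold are illustrative.
    fused = w_sig * sig_score + (1.0 - w_sig) * eeg_score
    return fused >= threshold

# A skilled forgery may score high on the signature channel alone,
# but the attacker cannot reproduce the victim's brain activity.
print(fused_decision(sig_score=0.9, eeg_score=0.2))  # False: rejected
print(fused_decision(sig_score=0.8, eeg_score=0.8))  # True: accepted
```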
The recognition of inner speech, which could give a 'voice' to patients who have no ability to speak or move, is a challenge for brain-computer interfaces (BCIs). A shortcoming of the available datasets is that they do not combine modalities to increase the performance of inner speech recognition. Multimodal brain datasets enable the fusion of neuroimaging modalities with complementary properties, such as the high spatial resolution of functional magnetic resonance imaging (fMRI) and the high temporal resolution of electroencephalography (EEG), and are therefore promising for decoding inner speech.
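A simple way to exploit these complementary properties is early (feature-level) fusion: concatenating per-trial feature vectors from both modalities before classification. The following is a toy sketch under that assumption, again with synthetic stand-in features rather than real fMRI/EEG data.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_trials, n_classes = 200, 4
y = rng.integers(0, n_classes, n_trials)
X_fmri = rng.standard_normal((n_trials, 50)) + 0.3 * y[:, None]  # spatially resolved features
X_eeg = rng.standard_normal((n_trials, 30)) + 0.2 * y[:, None]   # temporally resolved features

# Early fusion: concatenate per-trial feature vectors from both
# modalities, standardise inside the pipeline (so neither modality
# dominates through scale alone), then classify the joint vector.
X_joint = np.hstack([X_fmri, X_eeg])
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
print("early-fusion CV accuracy:", cross_val_score(clf, X_joint, y, cv=5).mean())
```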