Automatic Speech Activity Recognition from MEG Signals Using Seq2Seq Learning
dc.contributor.ORCID | 0000-0001-7265-217X (Wang, J) | |
dc.contributor.author | Dash, Debadatta | |
dc.contributor.author | Ferrari, P. | |
dc.contributor.author | Malik, S. | |
dc.contributor.author | Wang, Jun | |
dc.contributor.utdAuthor | Dash, Debadatta | |
dc.contributor.utdAuthor | Wang, Jun | |
dc.date.accessioned | 2020-03-10T19:57:38Z | |
dc.date.available | 2020-03-10T19:57:38Z | |
dc.date.issued | 2019-03 | |
dc.description | Due to copyright restrictions and/or publisher's policy full text access from Treasures at UT Dallas is limited to current UTD affiliates (use the provided Link to Article). | |
dc.description.abstract | Accurate interpretation of speech activity from brain signals is critical for understanding the relationship between neural patterns and speech production. Current research on speech activity recognition from brain activity relies heavily on region-of-interest (ROI) based functional connectivity analysis or source separation strategies to map the activity as a spatial localization of a brain function. Albeit effective, these methods require prior knowledge of the brain and expensive computational effort. In this study, we investigated automatic speech activity recognition from brain signals using machine learning. Neural signals of four subjects during four stages of a speech task (i.e., rest, perception, preparation, and production) were recorded using magnetoencephalography (MEG), which has excellent temporal and spatial resolution. First, a deep neural network (DNN) was used to classify the four whole tasks from the MEG signals. Further, we trained a sequence-to-sequence (Seq2Seq) long short-term memory recurrent neural network (LSTM-RNN) for continuous (sample-by-sample) prediction of the speech stages/tasks by leveraging its sequential pattern learning paradigm. Experimental results indicate the effectiveness of both the DNN and the LSTM-RNN for automatic speech activity recognition from MEG signals. © 2019 IEEE. | |
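The abstract's second model emits one speech-stage label per MEG sample rather than one label per recording. The sketch below illustrates that many-to-many labeling setup with a minimal NumPy LSTM cell; all dimensions, channel counts, and the random weights are illustrative assumptions, not the paper's actual architecture or trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Minimal LSTM cell: four gates computed jointly from [h, x]."""
    def __init__(self, n_in, n_hidden):
        self.n_hidden = n_hidden
        scale = 1.0 / np.sqrt(n_in + n_hidden)
        self.W = rng.normal(0.0, scale, (4 * n_hidden, n_in + n_hidden))
        self.b = np.zeros(4 * n_hidden)

    def step(self, x, h, c):
        z = self.W @ np.concatenate([h, x]) + self.b
        H = self.n_hidden
        i = sigmoid(z[0:H])        # input gate
        f = sigmoid(z[H:2 * H])    # forget gate
        o = sigmoid(z[2 * H:3 * H])  # output gate
        g = np.tanh(z[3 * H:])     # candidate cell state
        c = f * c + i * g
        h = o * np.tanh(c)
        return h, c

def predict_stages(meg, cell, W_out):
    """Emit one stage label per MEG sample (many-to-many labeling)."""
    h = np.zeros(cell.n_hidden)
    c = np.zeros(cell.n_hidden)
    labels = []
    for x in meg:                  # one time step per MEG sample
        h, c = cell.step(x, h, c)
        labels.append(int(np.argmax(W_out @ h)))
    return labels

# Toy run: 50 samples of 8 hypothetical "MEG channels", 4 stages
# (rest, perception, preparation, production).
T, n_ch, n_hid, n_stages = 50, 8, 16, 4
cell = LSTMCell(n_ch, n_hid)
W_out = rng.normal(0.0, 0.1, (n_stages, n_hid))
meg = rng.normal(size=(T, n_ch))
stages = predict_stages(meg, cell, W_out)
print(len(stages))  # one predicted stage label per input sample
```

With untrained random weights the labels are meaningless; the point is only the shape of the problem: input length T in, label sequence of length T out, which is what distinguishes the Seq2Seq labeling task from the whole-recording DNN classifier also described in the abstract.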
dc.description.department | Erik Jonsson School of Engineering and Computer Science | |
dc.description.department | Callier Center for Communication Disorders | |
dc.description.sponsorship | National Institutes of Health (NIH) under award number R03 DC013990 | |
dc.identifier.bibliographicCitation | Dash, D., P. Ferrari, S. Malik, and J. Wang. 2019. "Automatic Speech Activity Recognition from MEG Signals Using Seq2Seq Learning." International IEEE/EMBS Conference on Neural Engineering, 9th: 340-343, doi: 10.1109/NER.2019.8717186 | |
dc.identifier.isbn | 9781538679210 | |
dc.identifier.uri | http://dx.doi.org/10.1109/NER.2019.8717186 | |
dc.identifier.uri | https://hdl.handle.net/10735.1/7384 | |
dc.identifier.volume | 2019 | |
dc.language.iso | en | |
dc.publisher | IEEE Computer Society | |
dc.relation.isPartOf | International IEEE/EMBS Conference on Neural Engineering, 9th | |
dc.rights | ©2019 IEEE | |
dc.subject | Brain | |
dc.subject | Brain mapping | |
dc.subject | Neural networks (Neurobiology) | |
dc.subject | Image segmentation | |
dc.subject | Long-term memory | |
dc.subject | Magnetoencephalography | |
dc.subject | Speech | |
dc.subject | Automatic speech recognition | |
dc.subject | Speech perception | |
dc.subject | Short-term memory | |
dc.title | Automatic Speech Activity Recognition from MEG Signals Using Seq2Seq Learning | |
dc.type.genre | article |
Files
Original bundle
- Name: JECS-4986-261009.08-LINK.pdf
- Size: 166.35 KB
- Format: Adobe Portable Document Format
- Description: Link to Article