Attend and Decode: 4D fMRI Task State Decoding Using Attention Models

Abstract

Source code for the Brain Attend and Decode paper. Functional magnetic resonance imaging (fMRI) is a neuroimaging modality that captures the blood oxygen level in a subject's brain while the subject either rests or performs a variety of functional tasks under different conditions. Given fMRI data, the problem of inferring the task, known as task state decoding, is challenging due to the high dimensionality (hundreds of millions of sampling points per datum) and complex spatiotemporal blood flow patterns inherent in the data. In this work, we propose to tackle the fMRI task state decoding problem by casting it as a 4D spatiotemporal classification problem. We present a novel architecture called Brain Attend and Decode (BAnD) that uses residual convolutional neural networks for spatial feature extraction and self-attention mechanisms for temporal modeling. We achieve significant performance gains compared to previous work on a 7-task benchmark from the large-scale Human Connectome Project-Young Adult (HCP-YA) dataset. We also investigate the transferability of BAnD's extracted features to unseen HCP tasks, either by freezing the spatial feature extraction layers and retraining the temporal model, or by finetuning the entire model. The pre-trained features from BAnD are useful on similar tasks, while finetuning them yields competitive results on unseen tasks/conditions.
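
For readers who want a concrete picture of the architecture described in the abstract, below is a minimal, illustrative PyTorch sketch of a BAnD-style model: a small 3D residual CNN encodes each fMRI volume (time frame), a Transformer self-attention encoder models the resulting temporal sequence, and a linear head classifies the task. All layer sizes, module choices, and names such as BAnDSketch are assumptions made for illustration only; consult the repository at https://github.com/LLNL/BAnD for the released implementation.

# Illustrative sketch, not the released BAnD code: a per-frame 3D residual CNN
# for spatial features, a Transformer encoder (self-attention) over time, and
# a classification head, mirroring the pipeline described in the abstract.
import torch
import torch.nn as nn


class ResidualBlock3D(nn.Module):
    """Basic 3D residual block: two conv layers with an identity shortcut."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm3d(channels)
        self.conv2 = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm3d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)


class BAnDSketch(nn.Module):
    """Sketch: per-frame 3D CNN encoder + self-attention over time + classifier."""

    def __init__(self, num_classes: int = 7, embed_dim: int = 128,
                 num_heads: int = 4, num_layers: int = 2):
        super().__init__()
        # Spatial feature extractor applied independently to each 3D volume.
        self.spatial = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm3d(32),
            nn.ReLU(inplace=True),
            ResidualBlock3D(32),
            nn.Conv3d(32, embed_dim, kernel_size=3, stride=2, padding=1),
            ResidualBlock3D(embed_dim),
            nn.AdaptiveAvgPool3d(1),  # -> (batch*time, embed_dim, 1, 1, 1)
        )
        # Temporal model: self-attention over the sequence of frame embeddings.
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, batch_first=True)
        self.temporal = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, x):
        # x: (batch, time, depth, height, width), one channel per 3D volume.
        b, t = x.shape[:2]
        frames = x.reshape(b * t, 1, *x.shape[2:])   # merge batch and time
        feats = self.spatial(frames).flatten(1)      # (b*t, embed_dim)
        feats = feats.reshape(b, t, -1)              # (b, t, embed_dim)
        feats = self.temporal(feats)                 # self-attention over time
        return self.classifier(feats.mean(dim=1))    # pool over time, classify


if __name__ == "__main__":
    model = BAnDSketch(num_classes=7)
    dummy = torch.randn(2, 8, 32, 32, 32)  # small synthetic 4D fMRI batch
    print(model(dummy).shape)              # -> torch.Size([2, 7])

    # Transfer-learning variants mentioned in the abstract (illustrative):
    # freeze the spatial extractor and retrain only the temporal model/head...
    for p in model.spatial.parameters():
        p.requires_grad = False
    # ...or finetune the entire model by leaving all parameters trainable.
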
Developers:
Ng, Brenda [1]; Kaplan, Alan [1]; Ray, Priyadip [1]; Nguyen, Chanh [1]
  1. Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
Release Date:
2021-05-07
Project Type:
Open Source, Publicly Available Repository
Software Type:
Scientific
Version:
1.
Licenses:
MIT License
Sponsoring Org.:
USDOE National Nuclear Security Administration (NNSA)
Code ID:
68570
Site Accession Number:
LLNL-CODE-823806
Research Org.:
Lawrence Livermore National Laboratory (LLNL), Livermore, CA (United States)
Country of Origin:
United States

Citation Formats

Ng, Brenda M., Kaplan, Alan D., Ray, Priyadip, and Nguyen, Chanh P. Attend and Decode: 4D fMRI Task State Decoding Using Attention Models. Computer Software. https://github.com/LLNL/BAnD. USDOE National Nuclear Security Administration (NNSA). 07 May. 2021. Web. doi:10.11578/dc.20211222.7.
Ng, Brenda M., Kaplan, Alan D., Ray, Priyadip, & Nguyen, Chanh P. (2021, May 07). Attend and Decode: 4D fMRI Task State Decoding Using Attention Models. [Computer software]. https://github.com/LLNL/BAnD. https://doi.org/10.11578/dc.20211222.7.
Ng, Brenda M., Kaplan, Alan D., Ray, Priyadip, and Nguyen, Chanh P. "Attend and Decode: 4D fMRI Task State Decoding Using Attention Models." Computer software. May 07, 2021. https://github.com/LLNL/BAnD. https://doi.org/10.11578/dc.20211222.7.
@misc{ doecode_68570,
title = {Attend and Decode: 4D fMRI Task State Decoding Using Attention Models},
author = {Ng, Brenda M. and Kaplan, Alan D. and Ray, Priyadip and Nguyen, Chanh P.},
abstractNote = {Source code for the Brain Attend and Decode paper. Functional magnetic resonance imaging (fMRI) is a neuroimaging modality that captures the blood oxygen level in a subject's brain while the subject either rests or performs a variety of functional tasks under different conditions. Given fMRI data, the problem of inferring the task, known as task state decoding, is challenging due to the high dimensionality (hundreds of millions of sampling points per datum) and complex spatiotemporal blood flow patterns inherent in the data. In this work, we propose to tackle the fMRI task state decoding problem by casting it as a 4D spatiotemporal classification problem. We present a novel architecture called Brain Attend and Decode (BAnD) that uses residual convolutional neural networks for spatial feature extraction and self-attention mechanisms for temporal modeling. We achieve significant performance gains compared to previous work on a 7-task benchmark from the large-scale Human Connectome Project-Young Adult (HCP-YA) dataset. We also investigate the transferability of BAnD's extracted features to unseen HCP tasks, either by freezing the spatial feature extraction layers and retraining the temporal model, or by finetuning the entire model. The pre-trained features from BAnD are useful on similar tasks, while finetuning them yields competitive results on unseen tasks/conditions.},
doi = {10.11578/dc.20211222.7},
url = {https://doi.org/10.11578/dc.20211222.7},
howpublished = {[Computer Software] \url{https://doi.org/10.11578/dc.20211222.7}},
year = {2021},
month = {may}
}