ID | 63416 |
Full-text URL | |
Authors |
Matsui, Teppei
Department of Biology, Okayama University
Taki, Masato
Graduate School of Artificial Intelligence and Science, Rikkyo University
Pham, Trung Quang
Supportive Center for Brain Research, National Institute for Physiological Sciences
Chikazoe, Junichi
Supportive Center for Brain Research, National Institute for Physiological Sciences
Jimura, Koji
Department of Biosciences and Informatics, Keio University
|
Abstract | Deep neural networks (DNNs) can accurately decode task-related information from brain activations. However, because of the non-linearity of DNNs, it is generally difficult to explain how and why they assign certain behavioral tasks to given brain activations, either correctly or incorrectly. One promising approach for explaining such a black-box system is counterfactual explanation. In this framework, the behavior of a black-box system is explained by comparing real data with realistic synthetic data that are specifically generated such that the black-box system outputs an unreal outcome. The system's decision can then be explained by directly comparing the real and synthetic data. Recently, by taking advantage of advances in DNN-based image-to-image translation, several studies have successfully applied counterfactual explanation to image domains. In principle, the same approach could be applied to functional magnetic resonance imaging (fMRI) data. Because fMRI datasets often contain multiple classes (e.g., multiple behavioral tasks), the image-to-image transformation used for counterfactual explanation needs to learn mappings among multiple classes simultaneously. Recently, a new generative neural network (StarGAN) that enables image-to-image transformation among multiple classes has been developed. By adapting StarGAN with some modifications, we here introduce a novel generative DNN (counterfactual activation generator, CAG) that can provide counterfactual explanations for DNN-based classifiers of brain activations. Importantly, CAG can simultaneously handle image transformation among all seven classes in a publicly available fMRI dataset. Thus, CAG can provide counterfactual explanations for DNN-based multiclass classifiers of brain activations. Furthermore, iterative application of CAG enhanced and extracted subtle spatial brain activity patterns that affected the classifier's decisions. Together, these results demonstrate that counterfactual explanation based on image-to-image transformation is a promising approach for understanding and extending the current applications of DNNs in fMRI analyses.
|
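As a rough illustration of the counterfactual-explanation procedure summarized in the abstract, below is a minimal sketch assuming a PyTorch setting. The `Generator` and `Classifier` modules are hypothetical toy stand-ins, not the authors' CAG or decoder; only the overall flow described in the abstract is shown (translate a real activation map toward a target task class, optionally iterate, then inspect the real-vs-counterfactual difference map). The map shape is a placeholder; only the seven-class count is taken from the record.

```python
# Hypothetical sketch of counterfactual explanation for a brain-activation
# classifier via a StarGAN-style conditional generator. Not the authors' code.
import math
import torch
import torch.nn as nn

N_CLASSES = 7            # number of task classes in the fMRI dataset (from the abstract)
MAP_SHAPE = (1, 64, 64)  # placeholder shape for a 2D activation map

class Generator(nn.Module):
    """Toy conditional generator: (activation map, target class) -> counterfactual map."""
    def __init__(self):
        super().__init__()
        in_ch = MAP_SHAPE[0] + N_CLASSES
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, MAP_SHAPE[0], 3, padding=1),
        )

    def forward(self, x, target):
        onehot = torch.eye(N_CLASSES)[target]                              # (B, 7)
        cond = onehot[:, :, None, None].expand(-1, -1, *x.shape[2:])       # broadcast class code
        return self.net(torch.cat([x, cond], dim=1))

class Classifier(nn.Module):
    """Toy decoder standing in for the trained DNN classifier being explained."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(math.prod(MAP_SHAPE), N_CLASSES))

    def forward(self, x):
        return self.net(x)

def counterfactual_explanation(x, target, generator, classifier, n_iter=1):
    """Iteratively translate x toward `target`; the difference map highlights
    the spatial pattern that drives the classifier's decision."""
    x_cf = x
    for _ in range(n_iter):
        x_cf = generator(x_cf, target)
    pred = classifier(x_cf).argmax(dim=1)      # class assigned to the counterfactual
    return x_cf - x, pred

if __name__ == "__main__":
    g, c = Generator(), Classifier()
    x = torch.randn(1, *MAP_SHAPE)             # one synthetic activation map
    diff, pred = counterfactual_explanation(x, torch.tensor([3]), g, c, n_iter=3)
    print(diff.shape, pred.item())
```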
Keywords | fMRI
deep learning
explainable AI
decoding
generative neural network
counterfactual explanation
|
Publication date | 2022-03-16
|
Publication title |
Frontiers in Neuroinformatics
|
Volume | 15
|
Publisher | Frontiers Media
|
Start page | 802938
|
ISSN | 1662-5196
|
Material type |
Journal article
|
Language |
English
|
OAI-PMH Set |
Okayama University
|
Copyright holder | © 2022 Matsui, Taki, Pham, Chikazoe and Jimura.
|
Article version | publisher
|
PubMed ID | |
DOI | |
Web of Science KeyUT | |
Related URL | isVersionOf https://doi.org/10.3389/fninf.2021.802938
|
License | https://creativecommons.org/licenses/by/4.0/
|