Recent progress in deep learning has significantly impacted materials science, leading to accelerated material discovery and innovation. ElemNet, a deep neural network model that predicts formation energy from elemental compositions, exemplifies the application of deep learning techniques in this field. However, the “black-box” nature of deep learning models often raises concerns about their interpretability and reliability. In this study, we propose XElemNet to explore the interpretability of ElemNet by applying a series of explainable artificial intelligence (XAI) techniques, focusing on post-hoc analysis and model transparency. The experiments with artificial binary datasets reveal ElemNet’s effectiveness in predicting convex hulls of element-pair systems across periodic table groups, indicating its capability to effectively discern elemental interactions in most cases. Additionally, feature importance analysis within ElemNet highlights alignment with chemical properties of elements such as reactivity and electronegativity. XElemNet provides insights into the strengths and limitations of ElemNet and offers a potential pathway for explaining other deep learning models in materials science.
Wang, Kewei, et al. "XElemNet: towards explainable AI for deep neural networks in materials science." Scientific Reports, vol. 14, no. 1, Oct. 2024. https://doi.org/10.1038/s41598-024-76535-2
Wang, Kewei, Vishu Gupta, Claire Songhyun Lee, et al., "XElemNet: towards explainable AI for deep neural networks in materials science," Scientific Reports 14, no. 1 (2024), https://doi.org/10.1038/s41598-024-76535-2
@article{osti_2473509,
author = {Wang, Kewei and Gupta, Vishu and Lee, Claire Songhyun and Mao, Yuwei and Kilic, Muhammed Nur Talha and Li, Youjia and Huang, Zanhua and Liao, Wei-keng and Choudhary, Alok and Agrawal, Ankit},
title = {XElemNet: towards explainable AI for deep neural networks in materials science},
doi = {10.1038/s41598-024-76535-2},
url = {https://www.osti.gov/biblio/2473509},
journal = {Scientific Reports},
issn = {2045-2322},
number = {1},
volume = {14},
address = {United Kingdom},
publisher = {Nature Publishing Group},
year = {2024},
month = oct}