Enhancing Interpretability in Generative Modeling: Statistically Disentangled Latent Spaces Guided by Generative Factors in Scientific Datasets
Journal Article · Machine Learning
- Argonne National Laboratory
- National Renewable Energy Lab., Golden, CO (United States)
This study addresses the challenge of statistically extracting generative factors from complex, high-dimensional datasets in unsupervised or semi-supervised settings. We investigate encoder-decoder-based generative models for nonlinear dimensionality reduction, focusing on disentangling low-dimensional latent variables that correspond to independent physical factors. We introduce Aux-VAE, a novel architecture within the classical Variational Autoencoder framework, which achieves disentanglement with minimal modifications to the standard VAE loss function by leveraging prior statistical knowledge through auxiliary variables. These auxiliary variables shape the latent space by aligning individual latent factors with them. We validate the efficacy of Aux-VAE through comparative assessments on multiple datasets, including astronomical simulations.
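The record gives only a high-level description of the method. As a rough illustration of the idea, the sketch below augments a standard VAE objective with an alignment term that ties a subset of latent dimensions to known auxiliary variables. The class `AuxVAE`, the function `aux_vae_loss`, the MSE form of the alignment penalty, and the weights `beta`/`gamma` are illustrative assumptions, not the paper's actual architecture or loss.

```python
# Hypothetical Aux-VAE sketch: a plain VAE whose ELBO is extended with an
# auxiliary-alignment penalty on the first few latent dimensions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AuxVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=8, aux_dim=3, hidden=256):
        super().__init__()
        self.aux_dim = aux_dim  # latent dims aligned with auxiliary variables
        self.enc = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_dim)
        self.logvar = nn.Linear(hidden, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(z), mu, logvar

def aux_vae_loss(model, x, aux, beta=1.0, gamma=1.0):
    """Standard ELBO terms plus an assumed MSE alignment penalty on the
    latent means corresponding to the auxiliary variables."""
    x_hat, mu, logvar = model(x)
    recon = F.mse_loss(x_hat, x, reduction="sum") / x.size(0)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)
    align = F.mse_loss(mu[:, :model.aux_dim], aux, reduction="sum") / x.size(0)
    return recon + beta * kl + gamma * align
```

In this sketch the remaining latent dimensions beyond `aux_dim` are left unconstrained, so they are free to capture variation not explained by the auxiliary variables; how the paper balances the three terms is not specified in this record.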
- Research Organization:
- National Renewable Energy Laboratory (NREL), Golden, CO (United States)
- Sponsoring Organization:
- USDOE Office of Science (SC), Advanced Scientific Computing Research (ASCR). Scientific Discovery through Advanced Computing (SciDAC)
- DOE Contract Number:
- AC36-08GO28308
- OSTI ID:
- 2584766
- Report Number(s):
- NREL/JA-2C00-93144
- Journal Information:
- Machine Learning, Vol. 114, Issue 9
- Country of Publication:
- United States
- Language:
- English
Similar Records
Optimizing training trajectories in variational autoencoders via latent Bayesian optimization approach
Journal Article · 2023 · Machine Learning: Science and Technology · OSTI ID: 1923196