
Scalable balanced training of conditional generative adversarial neural networks on image data

Journal Article · Journal of Supercomputing

Here, we propose a distributed approach to train deep convolutional conditional generative adversarial network (DC-CGAN) models. Our method reduces the imbalance between generator and discriminator by partitioning the training data according to data labels, and enhances scalability by performing parallel training in which multiple generators are trained concurrently, each focusing on a single data label. Performance is assessed in terms of inception score, Fréchet inception distance, and image quality on the MNIST, CIFAR10, CIFAR100, and ImageNet1k datasets, showing a significant improvement over state-of-the-art techniques for training DC-CGANs. Weak scaling is attained on all four datasets using up to 1000 processes and 2000 NVIDIA V100 GPUs on the OLCF supercomputer Summit.
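The label-partitioning idea above can be sketched in a few lines: split the dataset into per-class shards, then map each shard to a parallel worker so that every generator trains on exactly one label. This is a minimal illustrative sketch, not the paper's implementation; the function names (`partition_by_label`, `shard_for_rank`) and the round-robin rank mapping are assumptions for illustration.

```python
import numpy as np

def partition_by_label(images, labels):
    """Split a dataset into per-label shards, one per class.

    Each shard would later be handed to its own generator, so every
    generator sees samples of a single class only (hypothetical sketch
    of the label partitioning described in the abstract).
    """
    shards = {}
    for lbl in np.unique(labels):
        shards[int(lbl)] = images[labels == lbl]
    return shards

def shard_for_rank(shards, rank, world_size):
    """Round-robin assignment of class shards to parallel workers,
    mimicking how concurrent generators could each claim one label."""
    classes = sorted(shards)
    owned = [c for i, c in enumerate(classes) if i % world_size == rank]
    return {c: shards[c] for c in owned}

# Toy data: 10 samples across 3 classes.
rng = np.random.default_rng(0)
images = rng.normal(size=(10, 4))
labels = np.array([0, 1, 2, 0, 1, 2, 0, 1, 2, 0])

shards = partition_by_label(images, labels)
print({c: len(s) for c, s in shards.items()})  # {0: 4, 1: 3, 2: 3}
print(sorted(shard_for_rank(shards, rank=0, world_size=2)))  # [0, 2]
```

In an actual distributed run, each rank would construct only its own generator(s) and draw minibatches from its assigned shards, which is what keeps the per-generator workload balanced across processes.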

Research Organization:
Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility (OLCF)
Sponsoring Organization:
USDOE Laboratory Directed Research and Development (LDRD) Program; USDOE Office of Science (SC)
Grant/Contract Number:
AC05-00OR22725
OSTI ID:
1783019
Journal Information:
Journal of Supercomputing, Vol. 77; ISSN 0920-8542
Publisher:
Springer
Country of Publication:
United States
Language:
English
