Fed-DeepONet: Stochastic Gradient-Based Federated Training of Deep Operator Networks
The Deep Operator Network (DeepONet) framework is a class of neural network architectures trained to learn nonlinear operators, i.e., mappings between infinite-dimensional function spaces. Traditionally, DeepONets are trained with a centralized strategy that requires transferring all training data to a single location. Such a strategy, however, limits our ability to preserve data privacy or to exploit high-performance distributed/parallel computing platforms. To alleviate these limitations, in this paper, we study the federated training of DeepONets for the first time. That is, we develop a framework, which we refer to as Fed-DeepONet, that allows multiple clients to train DeepONets collaboratively under the coordination of a centralized server. To realize Fed-DeepONet, we propose an efficient stochastic gradient-based algorithm that enables distributed optimization of the DeepONet parameters by averaging first-order estimates of the DeepONet loss gradient. Then, to accelerate the training convergence of Fed-DeepONet, we propose a moment-enhanced (i.e., adaptive) stochastic gradient-based strategy. Finally, we verify the performance of Fed-DeepONet by learning, for different numbers of clients and fractions of available clients, (i) the solution operator of a gravity pendulum and (ii) the dynamic response of a parametric library of pendulums.
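The abstract gives no pseudocode, but it describes a server-coordinated round in which selected clients return stochastic first-order estimates of the DeepONet loss gradient, the server averages them, and a momentum term can accelerate convergence. The sketch below is a minimal illustration of one possible reading of that update, not the authors' implementation; the `Client` class, the `stochastic_gradient` interface, the flat NumPy parameter vector, and all hyperparameter values are assumptions introduced here for illustration.

```python
import numpy as np

class Client:
    """Hypothetical client holding a private slice of the operator-learning
    data; stochastic_gradient returns a minibatch estimate of the DeepONet
    loss gradient at the given parameter vector (assumed interface)."""
    def __init__(self, grad_fn):
        self.grad_fn = grad_fn

    def stochastic_gradient(self, theta):
        return self.grad_fn(theta)

def fed_deeponet_round(theta, velocity, clients, lr=1e-3, beta=0.9,
                       frac=0.5, rng=None):
    # Server samples a fraction of the available clients for this round.
    rng = rng if rng is not None else np.random.default_rng()
    m = max(1, int(frac * len(clients)))
    chosen = rng.choice(len(clients), size=m, replace=False)

    # Each selected client reports a first-order (stochastic) estimate of
    # the DeepONet loss gradient; the server averages the estimates.
    g_avg = sum(clients[i].stochastic_gradient(theta) for i in chosen) / m

    # Plain variant would step as theta -= lr * g_avg; the moment-enhanced
    # (adaptive) variant instead tracks an exponential moving average of
    # the averaged gradients and steps along it.
    velocity = beta * velocity + (1.0 - beta) * g_avg
    theta = theta - lr * velocity
    return theta, velocity

# Toy usage: quadratic surrogate losses stand in for per-client DeepONet
# losses (purely illustrative, not the pendulum benchmarks of the paper).
dim = 4
targets = np.random.default_rng(0).normal(size=(5, dim))
clients = [Client(lambda th, a=a: 2.0 * (th - a)) for a in targets]
theta, v = np.zeros(dim), np.zeros(dim)
for _ in range(200):
    theta, v = fed_deeponet_round(theta, v, clients, frac=0.6)
```

Setting `beta=0.0` recovers the plain gradient-averaging variant, so the two strategies described in the abstract differ only in whether the server retains momentum state across rounds.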
- Sponsoring Organization:
- USDOE
- Grant/Contract Number:
- SC0021142
- OSTI ID:
- 1886960
- Journal Information:
- Algorithms, Vol. 15, Issue 9; ISSN 1999-4893
- Publisher:
- MDPI AG
- Country of Publication:
- Switzerland
- Language:
- English
Similar Records
DeepONet-grid-UQ: A trustworthy deep operator framework for predicting the power grid’s post-fault trajectories
FedADMP: A Joint Anomaly Detection and Mobility Prediction Framework via Federated Learning