U.S. Department of Energy
Office of Scientific and Technical Information

Scalable Second Order Optimization for Machine Learning

Technical Report · DOI: https://doi.org/10.2172/1984057 · OSTI ID: 1984057
Author affiliation:
1. Pacific Northwest National Laboratory (PNNL), Richland, WA (United States)

Many machine learning (ML) training tasks are essentially optimization processes that at first glance appear eminently parallelizable and scalable. In practice, however, effective acceleration of these tasks on scalable parallel hardware has proven elusive. Standard methods such as stochastic gradient descent (SGD) for deep neural networks (DNNs) tend to be resource efficient, but they appear to be fundamentally sequential in nature: each update depends on the iterate produced by the previous one.
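To illustrate the contrast the abstract draws, the sketch below compares a first-order SGD update, whose iterates form a sequential dependency chain, with a Newton-style second-order update that folds curvature information into a single step. This is a minimal illustration assuming a simple least-squares objective; the objective, variable names, and step size are assumptions for exposition, not code or results from the report.

import numpy as np

# Minimal sketch: first-order (SGD) vs. second-order (Newton) updates
# on the least-squares objective f(w) = 0.5 * ||X w - y||^2.
# All names and settings here are illustrative assumptions.

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.01 * rng.normal(size=100)

def gradient(w, Xb, yb):
    # Gradient of 0.5 * ||Xb w - yb||^2 with respect to w.
    return Xb.T @ (Xb @ w - yb)

def sgd_step(w, Xb, yb, lr=0.01):
    # First-order update: inherently sequential, since each step
    # consumes the iterate produced by the previous step.
    return w - lr * gradient(w, Xb, yb)

def newton_step(w, Xb, yb):
    # Second-order update: uses curvature (the Hessian Xb^T Xb),
    # replacing many small dependent steps with one linear solve.
    H = Xb.T @ Xb
    return w - np.linalg.solve(H, gradient(w, Xb, yb))

w_sgd = np.zeros(5)
for i in range(0, 100, 10):              # many small sequential steps
    w_sgd = sgd_step(w_sgd, X[i:i+10], y[i:i+10])

w_newton = newton_step(np.zeros(5), X, y)  # one curvature-aware step

print("SGD error:   ", np.linalg.norm(w_sgd - w_true))
print("Newton error:", np.linalg.norm(w_newton - w_true))

Because the second-order step replaces a long chain of dependent updates with dense linear algebra (a Hessian build and a linear solve), most of its work maps naturally onto parallel hardware, which is the opportunity the report's title points to.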

Research Organization:
Pacific Northwest National Laboratory (PNNL), Richland, WA (United States)
Sponsoring Organization:
USDOE Laboratory Directed Research and Development (LDRD) Program
DOE Contract Number:
AC05-76RL01830
OSTI ID:
1984057
Report Number(s):
PNNL-32925
Country of Publication:
United States
Language:
English

Similar Records

Improving Deep Neural Networks’ Training for Image Classification With Nonlinear Conjugate Gradient-Style Adaptive Momentum
Journal Article · March 24, 2023 · IEEE Transactions on Neural Networks and Learning Systems · OSTI ID: 2280651

Efficient Generalizable Deep Learning
Technical Report · September 1, 2018 · OSTI ID: 1760400

Superconducting Hyperdimensional Associative Memory Circuit for Scalable Machine Learning
Journal Article · May 1, 2023 · IEEE Transactions on Applied Superconductivity · OSTI ID: 2326172
