U.S. Department of Energy
Office of Scientific and Technical Information
  1. Automatic Loss Factor Modeling and Attribution on Unlabeled PV Energy Data

    We present a novel approach for modeling the loss factors of photovoltaic (PV) power generation systems. This method is a white-box machine learning model built on convex optimization that is fast, interpretable, and auditable. It takes as input the measured daily energy produced by the system over a multi-year period and returns a multiplicative decomposition model of the daily energy signal along with a full attribution of the total energy loss to each feature. The methods section of this paper has two major components: (1) the description of the signal decomposition (SD) model, expressed in the SD framework, and (2) the attribution of total energy losses via Shapley values. We validate the method on synthetic and open-source data sets and compare it to similar methods from the literature.
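    To make the attribution step concrete, here is a minimal sketch of exact Shapley-value attribution over a multiplicative loss decomposition. The factor names, baseline energy, and loss values are illustrative placeholders, not data or code from the paper.

```python
# Hedged sketch: exact Shapley attribution of a multiplicative energy-loss
# decomposition, in the spirit of the paper's attribution step. All factor
# names and values below are hypothetical, for illustration only.
from itertools import combinations
from math import factorial

baseline = 100.0  # hypothetical ideal daily energy (kWh)
factors = {"soiling": 0.95, "shading": 0.90, "degradation": 0.97}

def energy(subset):
    """Energy with only the losses in `subset` applied."""
    e = baseline
    for name in subset:
        e *= factors[name]
    return e

def shapley(name):
    """Average marginal contribution of one loss factor over all coalitions."""
    others = [f for f in factors if f != name]
    n = len(factors)
    total = 0.0
    for k in range(len(others) + 1):
        for coalition in combinations(others, k):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += weight * (energy(coalition + (name,)) - energy(coalition))
    return total

attribution = {f: shapley(f) for f in factors}
print(attribution)                # per-factor energy loss (negative values)
print(sum(attribution.values()))  # equals total loss: energy(factors) - baseline
```

    By the efficiency property of Shapley values, the per-factor attributions sum exactly to the total energy loss, which is what makes the decomposition auditable.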

  2. Optimizing Desalination Operations for Energy Flexibility

    Despite the value of energy optimization in desalination processes, modeling dynamic operations over monthly billing periods has remained a computational challenge. This work proposes a framework for energy flexibility optimization that includes new modeling features: independent operation of parallel skids, start-up delays associated with chemical stabilization, industrial energy tariff structures, and hourly electrical carbon intensities. This is done using a modular and computationally efficient formulation that guarantees a globally optimal solution with standard optimization solvers. The approach is demonstrated in two distinct case studies: a seawater desalination plant in Santa Barbara, CA, and an indirect potable reuse facility in San Jose, CA. Trends predicted by the model are validated against operational facility measurements from a demand response shutdown event. Preliminary results show that optimizing for energy flexibility can yield 18.51% monthly cost savings over energy efficiency-optimized operation. The value extracted from a facility-wide shutdown during peak electricity price hours is hampered by start-up delays in post-treatment chemical stabilization. In cases in which a facility does not have much excess capacity, using a flow equalization tank or operating over a wide recovery range may be cost-effective.
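    As a toy illustration of the flexibility idea, the sketch below schedules hourly production around a peak-price window using a flow equalization tank, formulated as a small linear program in cvxpy. All prices, capacities, and the specific-energy figure are invented placeholders, not the paper's facility models or tariff data.

```python
# Hedged sketch: a minimal time-of-use scheduling LP. Numbers are illustrative.
import cvxpy as cp
import numpy as np

T = 24
hours = np.arange(T)
price = np.where((hours >= 16) & (hours < 21), 0.45, 0.12)  # $/kWh, peak 4-9 pm
demand = 400.0   # m^3/h of product water required downstream (assumed constant)
sec = 3.0        # kWh per m^3 (assumed specific energy consumption)

flow = cp.Variable(T, nonneg=True)        # hourly production (m^3/h)
level = cp.Variable(T + 1, nonneg=True)   # equalization tank level (m^3)

constraints = [flow <= 600,               # plant capacity
               level[0] == 2000,          # starting reserve
               level[T] >= 2000,          # end with at least the same reserve
               level <= 5000]             # tank capacity
for t in range(T):
    constraints.append(level[t + 1] == level[t] + flow[t] - demand)

cost = cp.sum(cp.multiply(price, sec * flow))
prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()
print(f"optimal daily energy cost: ${prob.value:.2f}")
print(np.round(flow.value))  # production shifts out of the peak-price window
```

    The tank lets production drop to zero during the peak-price window while demand is served from storage, which is the basic mechanism behind the shutdown-event value discussed above.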

  3. Measure this, not that: Optimizing the cost and model-based information content of measurements

    Model-based design of experiments (MBDoE) is a powerful framework for selecting and calibrating science-based mathematical models from data. This work extends popular MBDoE workflows by proposing a convex mixed-integer (non)linear programming (MINLP) formulation to optimize the selection of measurements. The solver MindtPy is modified to support calculating the D-optimality objective and its gradient via an external package, scipy, using the grey-box module in Pyomo. The new approach is demonstrated in two case studies: estimating highly correlated kinetics from a batch reactor and estimating transport parameters in a large-scale rotary packed bed for CO2 capture. Both case studies show how examining the Pareto-optimal trade-offs between information content, measured by A- and D-optimality, and measurement budget offers practical guidance for selecting measurements for scientific experiments.
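    The following sketch illustrates the underlying trade-off with a brute-force stand-in for the MINLP: it scores measurement subsets by D-optimality (log-determinant of the Fisher information) against their total cost. The random sensitivity matrix and costs are placeholders, not the paper's reactor or CO2-capture models, and plain enumeration replaces the MINLP solver for clarity.

```python
# Hedged sketch: D-optimality vs. measurement budget by brute force.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n_meas, n_params = 8, 3
J = rng.normal(size=(n_meas, n_params))    # d(output_i)/d(theta_j) sensitivities
cost = rng.uniform(1.0, 5.0, size=n_meas)  # per-measurement cost (arbitrary units)

def log_det_fim(subset):
    """D-optimality of a measurement subset (unit measurement noise assumed)."""
    Js = J[list(subset)]
    sign, logdet = np.linalg.slogdet(Js.T @ Js)  # Fisher information matrix
    return logdet if sign > 0 else -np.inf

# Trace a Pareto-style frontier: best D-optimality at each subset size.
for k in range(n_params, n_meas + 1):
    best = max(combinations(range(n_meas), k), key=log_det_fim)
    print(k, round(float(cost[list(best)].sum()), 2),
          round(log_det_fim(best), 3))
```

    Plotting information content against cost for these frontier points gives the kind of Pareto curve the abstract describes, just without the scalability of the MINLP approach.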

  4. Supporting ARPA-E Power Grid Optimization (Final Report)

    Pacific Northwest National Laboratory (PNNL), Arizona State University (ASU), Georgia Institute of Technology (Georgia Tech), Los Alamos National Laboratory (LANL), National Renewable Energy Laboratory (NREL), Texas A&M University (TAMU), The University of Texas at Austin (UT), and the University of Wisconsin-Madison (UW-M) supported the ARPA-E Grid Optimization (GO) Competition by providing a common problem formulation, data format, datasets, evaluation mechanism, scoring, rules, and results that culminated in the awarding of $9.24 million to teams from academia, industry, and national labs for solving three sets of increasingly difficult nonlinear, security-constrained AC Optimal Power Flow (AC-OPF) optimization problems, with the goal of increasing the efficiency of the U.S. electric grid. It is estimated that a 1% increase in efficiency can save $1 billion. Current industry practice typically uses a linear DC model (DC-OPF) to solve the OPF problem within the time constraints of the operating schedule. The GO Competition challenges the best power engineers, mathematicians, and computer scientists to make operational decisions possible based on accurate physical models. To accomplish this, the GO Competition created a series of Challenges and funded teams to produce the best solver. Challenge 1 was to solve the security-constrained AC-OPF problem. Challenge 2 extended it by adding adjustable transformer tap ratios, phase-shifting transformers, switchable shunts, price-responsive demand, ramp-rate-constrained generators and loads, and fast-start unit commitment (UC). Furthermore, Challenge 2 was a maximization problem while Challenge 1 was a minimization problem. While Challenge 3 was being developed, the entrants were invited to find better solutions to the Challenge 2 synthetic datasets with no restrictions on time, hardware, or algorithms; those solutions turned out to be very good. Challenge 3 expanded the Challenge 2 problem further with multiperiod dynamic markets, including advisory models for extreme weather events, day-ahead markets, and real-time markets with an extended look-ahead. These problems included active bid-in demand and topology optimization. Together the Challenges used nearly 30 million CPU hours. Since each team was working on the same problem, using the same data, and running on the same hardware, fair comparisons could be drawn as to the best solver. The datasets were varied enough, however, that the best solver for one dataset was not necessarily the best for another, so cumulative scores were used. The process was managed through the PNNL-maintained website https://GOCompetition.energy.gov, where entrants could find information about the problem, the data, and the rules; submit their solvers for evaluation; and see the scores of all competing teams on a leaderboard. Interest was worldwide, but only American teams were eligible for prizes. The Competition has produced 34 journal articles and 115 papers, and has been cited over 500 times in the literature, including 12 dissertations (4 from foreign countries: Colombia (2), Germany, and Italy) and 3 citations from the DOE Exascale project. Software developed by Pearl Street Technologies for Challenges 1 and 2 is now deployed by Southwest Power Pool (SPP) and the Midcontinent Independent System Operator (MISO). Other teams have received inquiries from venture capitalists. Google DeepMind has thanked the Competition for making the datasets developed for the Competition public and is using them to train machine learning models. The larger datasets have billions of unknowns to be solved for, but only a small fraction matter in the final solution. Knowing which unknowns are important can dramatically speed up the solution.
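    For context, the linear DC approximation that industry practice relies on can be written as a small linear program. The sketch below solves a toy 3-bus DC-OPF in cvxpy; the network data, limits, and costs are invented, and the competition's AC-OPF problems are far larger, nonlinear, and security-constrained.

```python
# Hedged sketch: a toy DC-OPF linear program. All network data are invented.
import cvxpy as cp
import numpy as np

# Lines: (from_bus, to_bus, susceptance, flow limit in MW)
lines = [(0, 1, 10.0, 100.0), (1, 2, 10.0, 100.0), (0, 2, 10.0, 60.0)]
load = np.array([0.0, 0.0, 150.0])        # MW demand at each bus
gen_cost = np.array([20.0, 30.0, 0.0])    # $/MWh; generators at buses 0 and 1
gen_max = np.array([120.0, 120.0, 0.0])

theta = cp.Variable(3)                    # bus voltage angles (rad)
pg = cp.Variable(3, nonneg=True)          # generator dispatch (MW)
constraints = [pg <= gen_max, theta[0] == 0]  # bus 0 is the angle reference

flow = {}
for i, j, b, lim in lines:
    flow[i, j] = b * (theta[i] - theta[j])    # DC approximation of line flow
    constraints.append(cp.abs(flow[i, j]) <= lim)

for n in range(3):                        # nodal power balance
    net_out = sum(f for (i, j), f in flow.items() if i == n) \
            - sum(f for (i, j), f in flow.items() if j == n)
    constraints.append(pg[n] - load[n] == net_out)

prob = cp.Problem(cp.Minimize(gen_cost @ pg), constraints)
prob.solve()
print("dispatch (MW):", np.round(pg.value, 1), "| cost: $%.2f" % prob.value)
```

    The AC-OPF problems posed by the Challenges replace these linear flow relations with the full nonlinear power-flow equations plus contingency (security) constraints, which is what makes them so much harder to solve within operational time limits.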

  5. Parallel hybrid quantum-classical machine learning for kernelized time-series classification

    Supervised time-series classification garners widespread interest because of its applicability throughout a broad application domain including finance, astronomy, biosensors, and many others. In this work, we tackle this problem with hybrid quantum-classical machine learning, deducing pairwise temporal relationships between time-series instances using a time-series Hamiltonian kernel (TSHK). A TSHK is constructed from a sum of inner products generated by quantum states evolved using a parameterized time evolution operator. This sum is then optimally weighted using techniques derived from multiple kernel learning. Because we treat the kernel weighting step as a differentiable convex optimization problem, our method can be regarded as an end-to-end learnable hybrid quantum-classical-convex neural network, or QCC-net, whose output is a dataset-generalized kernel function suitable for use in any kernelized machine learning technique such as the support vector machine (SVM). Using our TSHK as input to an SVM, we classify univariate and multivariate time series using quantum circuit simulators and demonstrate the efficient parallel deployment of the algorithm to 127-qubit superconducting quantum processors using quantum multi-programming.
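    A classical sketch of the kernel-weighting step follows: several precomputed Gram matrices are combined with convex weights and passed to a precomputed-kernel SVM. Here RBF kernels and a simple kernel-target-alignment score stand in for the quantum TSHK Gram matrices and the paper's differentiable convex weighting.

```python
# Hedged sketch: convex weighting of precomputed kernels + SVM. The kernels
# and weighting rule are classical stand-ins, not the TSHK or QCC-net.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 5))               # stand-in time-series features
y = (X[:, 0] + 0.1 * rng.normal(size=60) > 0).astype(int)

kernels = [rbf_kernel(X, X, gamma=g) for g in (0.1, 1.0, 10.0)]

def alignment(K, y):
    """Kernel-target alignment: one simple way to score a kernel."""
    yy = np.outer(2 * y - 1, 2 * y - 1)
    return (K * yy).sum() / (np.linalg.norm(K) * np.linalg.norm(yy))

w = np.array([max(alignment(K, y), 0.0) for K in kernels])
w /= w.sum()                               # convex combination weights
K = sum(wi * Ki for wi, Ki in zip(w, kernels))

clf = SVC(kernel="precomputed").fit(K, y)  # weighted kernel fed straight to SVM
print("kernel weights:", np.round(w, 3), "| train accuracy:", clf.score(K, y))
```

    The paper's contribution is, in part, making this weighting step differentiable and learning it end-to-end with the quantum kernel evaluation, rather than scoring each kernel independently as done here.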

  6. Robust and Simple ADMM Penalty Parameter Selection

    We present a new method for online selection of the penalty parameter for the alternating direction method of multipliers (ADMM) algorithm. ADMM is a widely used method for solving a range of optimization problems, including those that arise in signal and image processing. In its standard form, ADMM includes a scalar hyperparameter, known as the penalty parameter, which usually has to be tuned to achieve satisfactory empirical convergence. In this work, we develop a framework for analyzing the ADMM algorithm applied to a quadratic problem as an affine fixed-point iteration. Using this framework, we develop a new method for automatically tuning the penalty parameter by detecting when it has become too large or too small. We analyze this and several other methods with respect to their theoretical properties, such as robustness to problem transformations, and their empirical performance on several optimization problems. Our proposed algorithm is based on a theoretical framework with clear, explicit assumptions and approximations, is theoretically covariant/invariant to problem transformations, is simple to implement, and exhibits competitive empirical performance.
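    For reference, the sketch below shows ADMM on a lasso problem with the classic residual-balancing heuristic for adapting the penalty parameter (Boyd et al., 2011). This is one of the standard baselines such work compares against, not the paper's proposed selection method.

```python
# Hedged sketch: ADMM for the lasso with residual-balancing penalty updates.
# Residual balancing is a common baseline heuristic, not the paper's method.
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(100, 20))
x_true = rng.normal(size=20) * (rng.random(20) < 0.3)   # sparse ground truth
b = A @ x_true + 0.01 * rng.normal(size=100)
lam, rho = 0.1, 1.0

x = z = u = np.zeros(20)
AtA, Atb = A.T @ A, A.T @ b
for it in range(200):
    x = np.linalg.solve(AtA + rho * np.eye(20), Atb + rho * (z - u))
    z_old = z
    z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0)  # soft-threshold
    u = u + x - z
    r = np.linalg.norm(x - z)               # primal residual
    s = np.linalg.norm(rho * (z - z_old))   # dual residual
    # Grow rho when the primal residual dominates, shrink when the dual does;
    # rescale the scaled dual variable u whenever rho changes.
    if r > 10 * s:
        rho *= 2.0; u /= 2.0
    elif s > 10 * r:
        rho /= 2.0; u *= 2.0
print("nonzeros:", np.count_nonzero(np.abs(z) > 1e-6))
```

    One known weakness of this heuristic, which motivates more principled selection rules, is that its behavior is not invariant to simple rescalings of the problem data.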

  7. Stochastic projective splitting

    Here, we present a new, stochastic variant of the projective splitting (PS) family of algorithms for inclusion problems involving the sum of any finite number of maximal monotone operators. This new variant uses a stochastic oracle to evaluate one of the operators, which is assumed to be Lipschitz continuous, and (deterministic) resolvents to process the remaining operators. Our proposal is the first version of PS with such stochastic capabilities. We envision the primary application being machine learning (ML) problems, with the method’s stochastic features facilitating “mini-batch” sampling of datasets. Since it uses a monotone operator formulation, the method can handle not only Lipschitz-smooth loss minimization, but also min–max and noncooperative game formulations, with better convergence properties than the gradient descent-ascent methods commonly applied in such settings. The proposed method can handle any number of constraints and nonsmooth regularizers via projection and proximal operators. We prove almost-sure convergence of the iterates to a solution and a convergence rate result for the expected residual, and close with numerical experiments on a distributionally robust sparse logistic regression problem.
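    The sketch below shows a simpler mini-batch baseline on sparse logistic regression, the kind of problem used in the paper's experiments: a stochastic oracle handles the smooth loss and a proximal (resolvent) step handles the l1 regularizer. It is a point of comparison, not the stochastic projective splitting algorithm itself.

```python
# Hedged sketch: mini-batch proximal stochastic gradient for sparse logistic
# regression. Illustrates the stochastic-oracle + resolvent ingredients the
# abstract mentions, using a simpler method than projective splitting.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 30))
y = (X @ rng.normal(size=30) > 0).astype(float)
lam, step, batch = 0.01, 0.1, 32

w = np.zeros(30)
for it in range(500):
    idx = rng.choice(500, size=batch, replace=False)  # stochastic oracle: mini-batch
    p = 1 / (1 + np.exp(-X[idx] @ w))
    grad = X[idx].T @ (p - y[idx]) / batch            # smooth (Lipschitz) part
    w = w - step * grad
    w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0)  # prox of the l1 term
print("nonzeros:", np.count_nonzero(w),
      "| train accuracy:", ((X @ w > 0) == (y > 0.5)).mean())
```

    Projective splitting generalizes this picture to sums of arbitrarily many maximal monotone operators, which is what lets it cover min-max and game formulations that plain proximal gradient methods do not.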

  8. Pseudospectral convex optimization for on-ramp merging control of connected vehicles

    Merging into highways can be a daunting task for human drivers because of the intricate vehicle negotiations and potential risk within limited time and space. Connected vehicle (CV) technologies could address this problem and offer many benefits for road safety, traffic mobility, and energy efficiency. However, real-time optimal control of CVs remains an open challenge due to the nonlinear vehicle dynamics, non-convex fuel consumption model, and highly dynamic, uncertain inter-vehicle interactions. To tackle these issues, a novel real-time optimal control approach that balances computational efficiency and solution optimality is proposed for onboard application. To this end, the pseudospectral collocation method is integrated with a sequential convex programming approach to develop two new optimization algorithms, which are implemented within a model predictive control (MPC) framework to allow real-time generation of optimal merging speed profiles. One algorithm leverages a line search technique to improve convergence, and the other benefits from a trust region method for better computational efficiency. The optimality and convergence behavior of both proposed algorithms are investigated by comparing their solutions with those of a popular nonlinear solver. Furthermore, simulation results show that the proposed methods outperform the benchmark in terms of computational cost, fuel consumption, and traffic efficiency. In particular, the proposed fuel-economy merging rule reduces fuel consumption by 57.1% on average across four different traffic volumes. Meanwhile, the proposed optimal control algorithms reduce travel time by 2.2% on average compared to the “first-in-first-out” merging rule.
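    As a simplified illustration of one MPC step, the sketch below generates a merging speed profile for a double-integrator vehicle model with a convex quadratic surrogate cost in cvxpy. The real problem requires the paper's sequential convexification to handle nonlinear dynamics and the non-convex fuel model; all numbers here are invented.

```python
# Hedged sketch: one convex MPC step for a merging speed profile, using a
# double-integrator model and a quadratic comfort/fuel surrogate cost.
import cvxpy as cp
import numpy as np

N, dt = 30, 0.5                      # horizon steps, step size (s)
s = cp.Variable(N + 1)               # position along the ramp (m)
v = cp.Variable(N + 1)               # speed (m/s)
a = cp.Variable(N)                   # acceleration (m/s^2)

v_ref, s_merge = 25.0, 300.0         # target speed and merge-point position
constraints = [s[0] == 0, v[0] == 15.0,   # initial state
               s[N] >= s_merge,           # reach the merge point by horizon end
               v >= 0, v <= 30, cp.abs(a) <= 3]
for k in range(N):                   # double-integrator dynamics
    constraints += [s[k + 1] == s[k] + v[k] * dt,
                    v[k + 1] == v[k] + a[k] * dt]

cost = cp.sum_squares(v - v_ref) + 5 * cp.sum_squares(a)
prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()
print("status:", prob.status, "| final speed:", round(float(v.value[N]), 1), "m/s")
```

    In the sequential convex programming scheme the abstract describes, a problem like this is re-solved repeatedly around the previous trajectory, with a line search or trust region guarding each convexified step.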

  9. Parallel Memory-Independent Communication Bounds for SYRK

    In this paper, we focus on the parallel communication cost of multiplying a matrix with its transpose, known as a symmetric rank-k update (SYRK). SYRK requires half the computation of general matrix multiplication because of the symmetry of the output matrix. Recent work (Beaumont et al., SPAA '22) has demonstrated that the sequential I/O complexity of SYRK is also a constant factor smaller than that of general matrix multiplication. Inspired by this progress, we establish memory-independent parallel communication lower bounds for SYRK with smaller constants than general matrix multiplication, and we show that these constants are tight by presenting communication-optimal algorithms. The crux of the lower bound proof relies on extending a key geometric inequality to symmetric computations and analytically solving a constrained nonlinear optimization problem. The optimal algorithms use a triangular blocking scheme for the parallel distribution of the symmetric output matrix and the corresponding computation.
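    A serial sketch of the triangular blocking idea: compute only the lower-triangular blocks of C = A * A^T, roughly halving the multiplies relative to a full product. The block size and matrix shapes are arbitrary, and the parallel distribution and communication analysis from the paper are not modeled here.

```python
# Hedged sketch: serial SYRK computing only the lower triangle of C = A @ A.T
# with a triangular sweep over output blocks. Shapes and block size arbitrary.
import numpy as np

def syrk_lower(A, bs=64):
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(0, n, bs):
        for j in range(0, i + 1, bs):       # j <= i: triangular block sweep
            ri = slice(i, min(i + bs, n))
            rj = slice(j, min(j + bs, n))
            C[ri, rj] = A[ri] @ A[rj].T     # one block of the lower triangle
    return C

A = np.random.default_rng(4).normal(size=(300, 50))
C = syrk_lower(A)
ref = A @ A.T
assert np.allclose(np.tril(C), np.tril(ref))  # upper triangle follows by symmetry
```

    In the parallel setting, assigning processors to these triangular output blocks is what lets the communication-optimal algorithms hit the smaller constants in the lower bounds.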

  10. Internal calibration of transient kinetic data via machine learning

    The temporal analysis of products (TAP) reactor provides a vast amount of transient kinetic information that may be used to describe a variety of chemical features, including residence time distributions, kinetic coefficients, number of active sites, and reaction mechanisms. However, as with any measurement device, the TAP reactor signal is convolved with noise, and drift is common. To reduce the uncertainty of the kinetic measurement and any derived parameters or mechanisms, proper preprocessing must be performed prior to any advanced analysis. This preprocessing includes baseline correction, i.e., a shift in the voltage response, and calibration, i.e., a scaling of the flux response based on prior experiments. The traditional preprocessing methodology requires significant user discretion and relies on separate calibration experiments that may themselves drift over time. Herein we use machine learning techniques combined with physical constraints to understand the noise and drift generated within and between experiments, enhancing the chemical kinetic signal. The proposed methodology demonstrates clear benefits over the traditional preprocessing approach by eliminating the need for separate calibration experiments or heuristic input from the user.
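    For contrast, the sketch below performs the traditional two-step preprocessing the paper seeks to replace: estimate the baseline from a pre-pulse window, then rescale the corrected signal so its integrated area matches a known pulse size. The signal model, window, and pulse size are illustrative assumptions, not TAP data.

```python
# Hedged sketch: traditional baseline correction + calibration on a synthetic
# TAP-like trace. All signal parameters below are invented for illustration.
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 1000)
dt = t[1] - t[0]
true_flux = np.where(t > 0.1, 8 * np.exp(-8 * (t - 0.1)), 0.0)  # idealized exit flux
signal = 0.02 + 0.6 * true_flux + 0.01 * rng.normal(size=t.size)  # offset + gain + noise

baseline = signal[t < 0.05].mean()        # pre-pulse window is pure baseline
corrected = signal - baseline             # baseline correction (voltage shift)

moles_pulsed = (true_flux * dt).sum()     # known pulse size from a calibration gas
calibration = moles_pulsed / (corrected * dt).sum()  # scale flux area to pulse size
flux = calibration * corrected
print("recovered gain:", round(1 / calibration, 3))  # should be close to 0.6
```

    Both steps depend on choices (the baseline window, the calibration experiment) that can drift or vary by operator, which is exactly the user discretion the paper's internal, ML-based calibration aims to remove.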


Search: All Records, subject "convex optimization"
