U.S. Department of Energy
Office of Scientific and Technical Information
  1. A Bayesian Framework for Spectral Reprojection

    Fourier partial sum approximations yield exponential accuracy for smooth and periodic functions, but produce the infamous Gibbs phenomenon for non-periodic ones. Spectral reprojection resolves the Gibbs phenomenon by projecting the Fourier partial sum onto a Gibbs complementary basis, often prescribed as the Gegenbauer polynomials. However, noise in the Fourier data and the Runge phenomenon both degrade the quality of the Gegenbauer reconstruction. Motivated by its theoretical convergence properties, this paper proposes a new Bayesian framework for spectral reprojection, which allows a greater understanding of the impact of noise on the reprojection method from a statistical point of view. We are also able to improve robustness with respect to the Gegenbauer polynomial parameters. Finally, the framework provides a mechanism to quantify the uncertainty of the solution estimate.
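
    As illustration, here is a minimal sketch of classical Gegenbauer reprojection, the deterministic procedure that the Bayesian framework reinterprets statistically; it is not the paper's Bayesian formulation. The test function f(x) = x and the parameters N, m, and lam are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.special import eval_gegenbauer, gamma

f = lambda x: x                  # smooth but non-periodic on [-1, 1]
N, m, lam = 32, 8, 4             # Fourier modes, Gegenbauer degree, weight lambda

# Fourier coefficients of f on [-1, 1] (period 2) via uniform quadrature
xq = np.linspace(-1.0, 1.0, 4096, endpoint=False)
k = np.arange(-N, N + 1)
fhat = np.array([np.mean(f(xq) * np.exp(-1j * np.pi * kk * xq)) for kk in k])

def partial_sum(x):
    """Fourier partial sum S_N f(x); exhibits Gibbs oscillations."""
    return np.real(sum(fh * np.exp(1j * np.pi * kk * x) for fh, kk in zip(fhat, k)))

# Gauss-Legendre quadrature for the Gegenbauer-weighted inner products
xg, wg = np.polynomial.legendre.leggauss(256)
SNf = partial_sum(xg)
weight = (1.0 - xg**2) ** (lam - 0.5)

coeffs = []
for l in range(m + 1):
    # h_l is the normalization from the Gegenbauer orthogonality relation
    h = np.sqrt(np.pi) * eval_gegenbauer(l, lam, 1.0) * gamma(lam + 0.5) \
        / (gamma(lam) * (l + lam))
    coeffs.append(np.sum(wg * weight * eval_gegenbauer(l, lam, xg) * SNf) / h)

def reproject(x):
    """Reprojection of S_N f onto the Gegenbauer (Gibbs complementary) basis."""
    return sum(g * eval_gegenbauer(l, lam, x) for l, g in enumerate(coeffs))

x = np.linspace(-0.99, 0.99, 7)
print(np.max(np.abs(partial_sum(x) - f(x))))   # large error near the boundary
print(np.max(np.abs(reproject(x) - f(x))))     # substantially smaller error
```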

  2. Sequential Edge Detection Using Joint Hierarchical Bayesian Learning

    Abstract not provided.

  3. Sparsity-based photoacoustic image reconstruction with a linear array transducer and direct measurement of the forward model

    Photoacoustic (PA) imaging is an emerging imaging technique for many clinical applications. One of the challenges posed by clinical translation is that imaging systems often rely on a finite-aperture transducer rather than a full tomography system. This results in imaging artifacts arising from an underdetermined reconstruction of the initial pressure distribution (IPD). Furthermore, clinical applications often require deep imaging, resulting in a low signal-to-noise ratio for the acquired signal because of strong light attenuation in tissue. Conventional approaches to reconstructing the IPD, such as back projection and time reversal, do not adequately suppress the artifacts and noise. We propose a sparsity-based optimization approach that improves the reconstruction of the IPD in PA imaging with a linear array ultrasound transducer. In simulation studies, the forward model matrix was measured from k-Wave simulations, and the approach was applied to reconstruct simulated point objects and the Shepp–Logan phantom. The results were compared with the conventional back projection, time-reversal, frequency-domain reconstruction, and iterative least-squares approaches. In experimental studies, the forward model of our imaging system was directly measured by scanning a graphite point source through the imaging field of view. Experimental images of graphite inclusions in tissue-mimicking phantoms were reconstructed and compared with the back projection and iterative least-squares approaches. Altogether, these results show that our proposed optimization approach can leverage the sparsity of PA images to improve the reconstruction of the IPD and outperform existing popular reconstruction approaches.
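
    As a hedged illustration of the sparsity-based optimization described above, the sketch below recovers a sparse initial pressure distribution x from underdetermined, noisy measurements y = Ax + noise, with A playing the role of the (directly measured) forward model matrix. ISTA is used here as a generic ℓ1 solver; the paper's actual solver, forward model, and parameters may differ, and the random A below is purely synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n_meas, n_pix = 200, 400                    # underdetermined: fewer measurements
A = rng.standard_normal((n_meas, n_pix)) / np.sqrt(n_meas)  # synthetic forward model

x_true = np.zeros(n_pix)                    # sparse scene: a few point absorbers
x_true[rng.choice(n_pix, 8, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(n_meas)

def ista(A, y, lam=0.02, n_iter=500):
    """Iterative shrinkage-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L       # gradient step on the fidelity term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

x_l2 = np.linalg.lstsq(A, y, rcond=None)[0] # least-squares baseline
x_l1 = ista(A, y)
print("least-squares error:", np.linalg.norm(x_l2 - x_true))
print("sparse recon error: ", np.linalg.norm(x_l1 - x_true))
```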

  4. An Adaptive Fourier Filter for Relaxing Time Stepping Constraints for Explicit Solvers

    Filtering is necessary to stabilize numerical methods for piecewise smooth solutions. The resulting diffusion stabilizes the method, but may fail to resolve the solution near discontinuities. Moreover, high order filtering still requires cost-prohibitive time stepping. This paper introduces an adaptive filter that controls spurious modes of the solution, but is not unnecessarily diffusive. Consequently, we are able to stabilize the solution with larger time steps, while also taking advantage of the accuracy of a high order filter.
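
    To make the filtering idea concrete, here is a minimal sketch of a standard (non-adaptive) exponential Fourier filter applied to a discontinuous function; the filter order p and strength alpha are fixed, illustrative values, whereas the paper's adaptive filter adjusts the damping to the solution rather than applying one fixed profile.

```python
import numpy as np

def exp_filter(fhat, p=8, alpha=36.0):
    """Exponential filter: damp mode k by sigma(|k|/k_max) = exp(-alpha*eta^p).

    Large p leaves low modes nearly untouched (high order accuracy) but damps
    the highest modes weakly; small p is more diffusive but more stabilizing.
    """
    eta = np.abs(np.fft.fftfreq(len(fhat))) * 2.0   # normalized frequency in [0, 1]
    return fhat * np.exp(-alpha * eta ** p)

x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
f = np.sign(np.sin(x))                              # square wave: jump discontinuities
f_filtered = np.real(np.fft.ifft(exp_filter(np.fft.fft(f))))
```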

  5. Recovering fine details from under-resolved electron tomography data using higher order total variation ℓ1 regularization

    Over the last decade or so, reconstruction methods using ℓ1 regularization, often categorized as compressed sensing (CS) algorithms, have significantly improved the capabilities of high fidelity imaging in electron tomography. The most popular ℓ1 regularization approach within electron tomography has been total variation (TV) regularization. In addition to reducing unwanted noise, TV regularization encourages a piecewise constant solution with sparse boundary regions. In this paper we propose an alternative ℓ1 regularization approach for electron tomography based on higher order total variation (HOTV). Like TV, the HOTV approach promotes solutions with sparse boundary regions. In smooth regions, however, the solution is not limited to piecewise constant behavior. We demonstrate that this allows for more accurate reconstruction of a broader class of images, even those for which TV was designed, particularly when dealing with pragmatic tomographic sampling patterns and very fine image features. Finally, we present results for an electron tomography data set as well as a phantom example, and we make comparisons with discrete tomography approaches.
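
    The core mechanism is easy to see in one dimension: the order-k finite difference operator annihilates polynomials of degree k-1, so an order-k HOTV term is sparse on piecewise-polynomial signals, not just piecewise-constant ones. The sketch below illustrates this with an arbitrary piecewise smooth signal; it is a toy demonstration, not the paper's reconstruction pipeline.

```python
import numpy as np

def diff_matrix(n, order):
    """Order-k forward difference operator as an (n - k) x n matrix."""
    D = np.eye(n)
    for _ in range(order):
        D = D[1:, :] - D[:-1, :]
    return D

x = np.linspace(0.0, 1.0, 100)
signal = np.where(x < 0.5, x**2, 1.0 - x)    # piecewise smooth, jump at x = 0.5

for k in (1, 3):
    Dk_s = diff_matrix(len(signal), k) @ signal
    nnz = np.sum(np.abs(Dk_s) > 1e-8)
    print(f"order {k}: {nnz} of {len(Dk_s)} entries above threshold")
# TV (k = 1) is dense on the quadratic piece, so l1 regularization with TV
# pushes the reconstruction toward staircase artifacts there; HOTV (k = 3)
# is sparse everywhere except at the jump.
```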

  6. Image Reconstruction from Undersampled Fourier Data Using the Polynomial Annihilation Transform

    Fourier samples are collected in a variety of applications, including magnetic resonance imaging and synthetic aperture radar. The data are typically under-sampled and noisy. In recent years, ℓ1 regularization has received considerable attention in designing image reconstruction algorithms from under-sampled and noisy Fourier data. The underlying image is assumed to have some sparsity features, that is, some measurable features of the image have sparse representation. The reconstruction algorithm is typically designed to solve a convex optimization problem, which consists of a fidelity term penalized by one or more ℓ1 regularization terms. The Split Bregman Algorithm provides a fast explicit solution for the case when total variation (TV) is used for the ℓ1 regularization term. Due to its numerical efficiency, it has been widely adopted for a variety of applications. A well known drawback of using TV as an ℓ1 regularization term is that the reconstructed image will tend to default to a piecewise constant image. This issue has been addressed in several ways. Recently, the polynomial annihilation edge detection method was used to generate a higher order sparsifying transform, coined the “polynomial annihilation (PA) transform.” This paper adapts the Split Bregman Algorithm for the case when the PA transform is used as the ℓ1 regularization term. In so doing, we achieve a more accurate image reconstruction method from under-sampled and noisy Fourier data. Our new method compares favorably to the TV Split Bregman Algorithm, as well as to the popular approach combining total generalized variation (TGV) with shearlets.
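
    For reference, the sketch below shows the generic Split Bregman iteration for min ||Ax - y||^2 + lam*||Dx||_1; a first-difference matrix stands in for the sparsifying transform D, whereas the paper substitutes the PA transform. All sizes and parameters are illustrative.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding (shrinkage) operator."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def split_bregman(A, D, y, lam=0.05, mu=1.0, n_iter=100):
    x = np.zeros(A.shape[1])
    d = np.zeros(D.shape[0])
    b = np.zeros(D.shape[0])
    M = A.T @ A + mu * D.T @ D              # fixed matrix for every x-update
    for _ in range(n_iter):
        x = np.linalg.solve(M, A.T @ y + mu * D.T @ (d - b))  # quadratic subproblem
        d = soft(D @ x + b, lam / mu)       # explicit shrinkage subproblem
        b = b + D @ x - d                   # Bregman (dual) variable update
    return x

rng = np.random.default_rng(1)
n = 128
A = rng.standard_normal((64, n)) / 8.0      # synthetic underdetermined system
x_true = np.concatenate([np.zeros(n // 2), np.ones(n - n // 2)])
y = A @ x_true + 0.01 * rng.standard_normal(64)
D = np.diff(np.eye(n), axis=0)              # first differences (TV stand-in for PA)
x_rec = split_bregman(A, D, y)
print(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```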

  7. Discontinuity Detection in Multivariate Space for Stochastic Simulations

    Edge detection has traditionally been associated with detecting physical space jump discontinuities in one dimension, e.g. seismic signals, and in two dimensions, e.g. digital images. Hence most of the research on edge detection algorithms is restricted to these contexts. High-dimensional edge detection can be of significant importance, however. For instance, stochastic variants of classical differential equations not only have variables in space/time dimensions, but additional dimensions are often introduced to the problem by the nature of the random inputs. The stochastic solutions to such problems sometimes contain discontinuities in the corresponding random space, and prior knowledge of jump locations can be very helpful in increasing the accuracy of the final solution. Traditional edge detection methods typically require a uniform grid point distribution. They also often involve the computation of gradients and/or Laplacians, which can become very complicated to compute as the number of dimensions increases. The polynomial annihilation edge detection method, on the other hand, is more flexible in terms of its geometric specifications and is relatively easy to apply. This paper discusses the numerical implementation of the polynomial annihilation edge detection method for high-dimensional functions that arise when solving stochastic partial differential equations.
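
    A one-dimensional sketch of the polynomial annihilation idea is given below (the paper's subject is its high-dimensional implementation): annihilation coefficients built from m+1 local, possibly non-uniform grid points cancel all polynomials of degree below m, so the normalized sum is small in smooth regions and approximates the jump height near a discontinuity. The normalization and stencil choices follow the general recipe and are not necessarily the paper's exact implementation.

```python
import math
import numpy as np

def pa_jump_indicator(x, f, m=2):
    """Polynomial annihilation jump indicator at interior grid points."""
    n = len(x)
    L = np.zeros(n)
    for i in range(1, n - 1):
        lo = min(max(i - (m + 1) // 2, 0), n - m - 1)   # m + 1 nearby points
        xs, fs = x[lo:lo + m + 1], f[lo:lo + m + 1]
        # coefficients c_j = m! / prod_{k != j} (x_j - x_k) annihilate
        # all polynomials of degree < m
        c = np.array([math.factorial(m)
                      / np.prod([xs[j] - xs[kk] for kk in range(m + 1) if kk != j])
                      for j in range(m + 1)])
        q = np.sum(c[xs >= x[i]])           # normalization factor q_m(x)
        L[i] = np.dot(c, fs) / q
    return L

x = np.linspace(-1.0, 1.0, 101)
f = np.where(x < 0.3, np.sin(np.pi * x), np.cos(np.pi * x) + 2.0)
L = pa_jump_indicator(x, f, m=2)
print(x[np.argmax(np.abs(L))])              # located near the true jump at 0.3
```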

  8. Determining the Locations and Discontinuities in the Derivatives of Functions

    We introduce a method for detecting discontinuities in piecewise smooth functions and in their derivatives. The method is constructed from a local stencil of grid point values and is based on a polynomial annihilation technique. By varying the order of the method and the arrangement of the corresponding stencils, the jump discontinuities of a function and of its derivatives can be identified with high order accuracy. The method is efficient and robust, and it can be applied to non-uniform grid point distributions in one dimension.
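
    As an illustrative follow-up (an assumption about usage, not the paper's exact construction), a discontinuity in the first derivative can be located by applying the jump indicator from the previous sketch to first divided differences of the data, since a kink in f becomes a jump in the differenced sequence.

```python
import numpy as np
# assumes pa_jump_indicator from the previous sketch is in scope

x = np.linspace(-1.0, 1.0, 201)
f = np.abs(x - 0.25)                 # continuous, with a kink (f' jump) at 0.25
df = np.diff(f) / np.diff(x)         # first divided differences approximate f'
xm = 0.5 * (x[:-1] + x[1:])          # midpoints, where df naturally lives
L = pa_jump_indicator(xm, df, m=2)
print(xm[np.argmax(np.abs(L))])      # located near the true kink at x = 0.25
```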
