OSTI.GOV title logo U.S. Department of Energy
Office of Scientific and Technical Information

Title: Curve Reconstruction with Many Fewer Samples.


Abstract not provided.

Publication Date:
Research Org.:
Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
Sponsoring Org.:
USDOE National Nuclear Security Administration (NNSA)
OSTI Identifier:
Report Number(s):
Journal ID: ISSN 0167-7055; 647470
DOE Contract Number:
Resource Type:
Resource Relation:
Journal Volume: 35; Journal Issue: 5; Conference: Proposed for presentation at the Eurographics Symposium on Geometry Processing held June 20-24, 2016 in Berlin, Germany.
Country of Publication:
United States

Citation Formats

Ohrhallinger, Stefan, Mitchell, Scott A., and Wimmer, Michael. Curve Reconstruction with Many Fewer Samples. United States: N. p., 2016. Web. doi:10.1111/cgf.12973.
Ohrhallinger, Stefan, Mitchell, Scott A., & Wimmer, Michael. Curve Reconstruction with Many Fewer Samples. United States. doi:10.1111/cgf.12973.
Ohrhallinger, Stefan, Mitchell, Scott A., and Wimmer, Michael. 2016. "Curve Reconstruction with Many Fewer Samples." United States. doi:10.1111/cgf.12973.
@article{,
  title = {Curve Reconstruction with Many Fewer Samples},
  author = {Ohrhallinger, Stefan and Mitchell, Scott A. and Wimmer, Michael},
  abstractNote = {Abstract not provided.},
  doi = {10.1111/cgf.12973},
  journal = {},
  number = {5},
  volume = {35},
  place = {United States},
  year = {2016},
  month = {9}
}

Other availability
Please see Document Availability for additional information on obtaining the full-text document. Library patrons may search WorldCat to identify libraries that hold this conference proceeding.

  • Although Moore's Law remains technically valid, the performance enhancements in computing that traditionally resulted from increased CPU speeds ended years ago. Chip manufacturers have chosen to increase the number of CPU cores per chip instead of increasing clock speed. Unfortunately, these extra CPUs do not automatically improve simulation or reconstruction times; taking advantage of the extra computing power requires changing how software is written. Event reconstruction is globally serial, in the sense that raw data has to be unpacked first, channels have to be clustered to produce hits before those hits are identified as belonging to a track or shower, tracks have to be found and fit before they are vertexed, etc. However, many of the individual procedures along the reconstruction chain are intrinsically independent and are perfect candidates for optimization on multi-core architectures. Threading is perhaps the simplest approach to parallelizing a program, and Java includes a powerful threading facility built into the language. We have developed a fast and flexible reconstruction package (org.lcsim), written in Java, that has been used for numerous physics and detector optimization studies. In this paper we present the results of our studies on optimizing the performance of this toolkit using multiple threads on many-core architectures.
  • Interest in parallel architectures applied to real-time selections is growing in High Energy Physics (HEP) experiments. In this paper we describe performance measurements of Graphics Processing Units (GPUs) and the Intel Many Integrated Core (MIC) architecture when applied to a typical HEP online task: the selection of events based on the trajectories of charged particles. As a benchmark we use a scaled-up version of the algorithm used at the CDF experiment at the Tevatron for online track reconstruction, the SVT algorithm, as a realistic test case for low-latency trigger systems using new computing architectures for LHC experiments. We examine the complexity/performance trade-off in porting existing serial algorithms to many-core devices. Measurements of both data processing and data transfer latency are shown, considering different I/O strategies to/from the parallel devices.
  • The safe handling of activated samples requires containment and covering the sample to eliminate any potential for contamination. Subsequent characterization of the surface with x-rays ideally necessitates a thin film. While many films appear visually transparent, they are not necessarily x-ray transparent. Each film material has a unique beam attenuation and sometimes has amorphous peaks that can superimpose on those of the sample. To reconstruct the intensity of the underlying activated sample, the x-ray attenuation and signal due to the film need to be removed from those of the sample. This requires the calculation of unique deconvolution parameters for the film. The development of a reconstruction procedure for a contained/covered sample is described.
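The thread-based parallelization described in the Java reconstruction abstract above can be sketched as follows. This is a minimal illustrative example, not code from org.lcsim itself: `clusterRegion` is a hypothetical placeholder for one intrinsically independent reconstruction step (e.g. clustering the channels of a single detector region into hits), and the fixed thread pool stands in for the kind of threading facility the abstract refers to.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelReconstruction {
    // Hypothetical stand-in for one independent reconstruction step;
    // the quadratic "workload" is purely a placeholder.
    static int clusterRegion(int region) {
        return region * region;
    }

    public static void main(String[] args) throws Exception {
        // One worker thread per available core, as a multi-core sketch.
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());

        // Submit each independent region as its own task; the serial
        // ordering constraints of the full chain are outside this sketch.
        List<Future<Integer>> futures = new ArrayList<>();
        for (int region = 0; region < 8; region++) {
            final int r = region;
            futures.add(pool.submit(() -> clusterRegion(r)));
        }

        // Collect the per-region results; get() blocks until each finishes.
        int totalHits = 0;
        for (Future<Integer> f : futures) {
            totalHits += f.get();
        }
        pool.shutdown();

        System.out.println(totalHits); // prints 140 (0+1+4+...+49)
    }
}
```

Because each task is independent, the tasks can complete in any order on any core while the aggregated result stays deterministic, which is the property that makes such steps "perfect candidates" for multi-core optimization.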